September 6, 2025 65 mins

On this week's edition, Bridget runs through the news with Producer Mike. Tech companies saying, "Our policies prohibit the harm that is rampant on our platform!" is the theme of today's episode.

AI deepfake comic: https://www.smbc-comics.com/comic/aaaah

Meta creates flirty celebrity chatbots without permission, calls it "parody," trains them to talk to children romantically. https://www.reuters.com/business/meta-created-flirty-chatbots-taylor-swift-other-celebrities-without-permission-2025-08-29/

Age verification on adult sites is putting queer adult industry workers at risk, and pushing everyone to sketchier corners of the Internet. https://19thnews.org/2025/09/age-verification-queer-adult-industry-workers/

A Shein merchant used Luigi Mangione's AI-generated face to sell a shirt. https://www.404media.co/shein-luigi-mangione-ai-generated-listing-shirt/

Sad abortion news: Texas bans abortion pills from being mailed to anyone in the state. https://19thnews.org/2025/09/texas-abortion-pill-ban/

Positive abortion news: Illinois mandates access for university students. https://msmagazine.com/2025/09/02/chicago-illinois-abortion-pills-birth-control-contraception-college-university-health-center-students/

Child sex abuse victim begs Elon and X to take down photos of her abuse. https://www.bbc.com/news/articles/cq587wv4d5go

If you’re listening on Spotify, you can leave a comment there to let us know what you thought about these stories (Bridget reads every Spotify comment personally)  or email us at hello@tangoti.com

Follow Bridget and TANGOTI on social media!  ||  instagram.com/@bridgetmarieindc  || tiktok.com/@bridgetmarieindc  ||  youtube.com/@ThereAreNoGirlsOnTheInternet  

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
There Are No Girls on the Internet is a production of iHeartRadio and Unbossed Creative. I'm Bridget Todd, and this is There Are No Girls on the Internet. Welcome to There Are No Girls on the Internet, where we explore the intersection of technology, social media, and identity. And this is another installment of our weekly news roundup, where

(00:26):
we dig into the stories that you might have missed on the Internet so you don't have to. Okay, Producer Mike, I swear to you, this podcast is not going to turn into just me making fun of people with massive platforms falling for what I believe to be obvious AI. But will you allow me one real quick to kick us off?

Speaker 2 (00:45):
As long as our platform is very massive.

Speaker 1 (00:48):
Well, we're talking about Joe Rogan, so I don't know how much larger a podcast platform can get. Because Joe Rogan, on his show, fell for this deepfake video of Tim Walz wearing a T-shirt that said Fuck Trump, dancing on an elevator in a mall and, like, smacking his own butt. You can tell that Rogan clearly thinks

(01:09):
this is real by the way that he so casually kind of slips it into conversation. He's talking about how the Democrats are so weird and that Tim Walz is so weird, that after his failed vice presidential bid Democrats have basically just dropped him because he's so weird, and he just kind of slips in like, oh, did you see this weird video of him dancing wearing the Fuck Trump shirt? And I feel like that's how

(01:32):
you know that he thought it was real: it's just embedded in his brain and it's become what is real.

Speaker 3 (01:38):
You know what's really fun? Yeah, when someone is in that whole race and running for president or vice president, and then the race is over and they realize that person was a liability so they cut them off, and then that person goes wacky, like Tim Walz. You see where he had a Fuck Trump shirt on and he's dancing and going down an elevator.

Speaker 1 (02:00):
And when he's called out, he does the thing that
I feel we're seeing so much of where he says, well,
it may not be real, but I could see him
doing it. So doesn't it say something that I that
I thought it was real? Doesn't that make a difference?

Speaker 2 (02:15):
Does it say that it's AI? You say it's AI. Everybody says it.

Speaker 1 (02:20):
The video I played on top says it's AI generated.

Speaker 3 (02:23):
Riley Moore fell for an AI-generated video of Minnesota's governor. I fell for it too. And you know why I fell for it? Because I believe that he's capable of doing so much.

Speaker 2 (02:32):
That's his essence. That's like the new thing to do when you publicly endorse an obviously AI video: say, oh, but it's something they would do anyway, it's close enough to real. You know, we saw Chris Cuomo do that a couple of weeks ago with AOC, and now, you know, Joe Rogan is doubling down like, oh, isn't that real? I would?

Speaker 4 (02:53):
I would?

Speaker 2 (02:53):
You know, it seems like something he would do. It's
pretty pretty weak.

Speaker 1 (02:57):
There's a comic floating around the Internet that I'll put in the show notes that really speaks to the frustration around this, where the person who has just been duped says, oh, well, it says something that I thought this was real, and the other person is just screaming into the void, because that's such a frustrating response. But I do think that's where we're at. And something about this Joe Rogan clip

(03:18):
got to me, because he was such a significant voice that shaped our election, that shaped where we are in this current moment in politics. And then seeing evidence with my own eyes that he is basically operating on the same plane as your uncle who is obsessed with Facebook and, you know, thinks everything he sees

(03:40):
on the Internet is real. That is the voice that has shaped so much of where we are politically. Honestly, it kind of makes sense.

Speaker 2 (03:47):
Yeah, it makes a lot of sense. It is in
no way surprising that where we are currently politically has
been shaped by people who are unable to distinguish fact
from reality. Like that makes so much sense in explaining
the shit storm that's happening outside my window.

Speaker 1 (04:05):
And not even just that they cannot distinguish fact from reality, but that they then expect people to be held accountable for this fictional world that AI has propped up for them. So yeah, I mean, they're just living in a fantasy world, baby. Speaking of fake nonsense, let's talk about what's going on at Meta, because a few weeks ago we told you

(04:26):
all about this Kendall Jenner collaboration with Meta that spawned a chatbot kind of in Kendall Jenner's likeness called Big Sis Billie that tragically lured a man to his death. The chatbot insisted to this man that she was real, and invited this man, who had cognitive issues from an earlier stroke, to visit her in New York City. Tragically, he never

(04:47):
made it home. Well, that bot was designed with Kendall Jenner's permission during a short-lived thing where Meta was partnering with celebrities to create chatbots. But according to this new report from Reuters, Meta also developed a slew of flirtatious chatbots using the names and likenesses of real celebrities like Taylor Swift, Scarlett Johansson, Anne Hathaway and Selena

(05:08):
Gomez without their permission. Now, some of these bots were created by random regular users with a Meta tool for building chatbots, but Reuters did discover that an actual Meta employee had produced at least three of these unauthorized celebrity chatbots, including two Taylor Swift "parody" bots. Parody in scare quotes there.

Speaker 2 (05:30):
Why is it always Taylor Swift? I feel like we're constantly talking about weirdos at, like, Meta and particularly X and particularly Elon Musk, and it's always Taylor Swift.

Speaker 1 (05:40):
Oh my gosh, this could be a whole episode. I'm so glad that you brought that up. We might have to tap in Joey, our other super producer, as our Taylor Swift correspondent, because they know a lot about Taylor Swift, more than I do. But I think that there's something about Taylor Swift that represents so many things at the intersection of identity and technology, right. This

(06:01):
woman, this young woman, is a billionaire who has amassed all of this power, all of these followers, and yet is both kind of overexposed and unknowable. I think there's just something about Taylor Swift: we're talking about her constantly on the podcast, and constantly she is at the center of all of these tech stories.

(06:23):
So, about these Meta unauthorized celebrity chatbots: this is pretty disturbing. Users could even make chatbots of child celebrities, including Walker Scobell, a sixteen-year-old movie star. I didn't know who that was, but he's in the Percy Jackson Disney+ show. Shout out to our producer Joey for knowing that. When asked to produce a picture of this minor at the beach,

(06:45):
the Meta bot produced a lifelike shirtless image. Pretty cute, huh, the avatar wrote beneath the picture. This really stuck out to me, because one of the bits from the earlier story that we told you all about, the Kendall Jenner collaboration with Meta, that I don't think we got into in that episode, was that the reporting on that particular bot that Meta made with Jenner also revealed some troubling internal

(07:08):
documents about how Meta allowed its chatbots to interact with children.
Now we know Mark Zuckerberg has notoriously wanted to move
fast and break things when it comes to AI and chatbots.
He really wants his developers to pump the gas when
it comes to these chatbots and anything that might get
people more excited about them to interact with them more

(07:29):
and more. He's like, let's do it, including okaying Meta's chatbots having sensual conversations with kids. An internal Meta policy document seen by Reuters, as well as interviews with folks familiar with this chatbot training, show that Meta's policies have treated romantic overtures as a feature of its generative AI products,

(07:51):
which are available to users aged thirteen and up. So
this is from a policy document that somebody working at
Meta thought was a good idea to write down as
part of an official policy document. Quote, it is acceptable
to engage a child in conversations that are romantic or sensual.
This is from Meta's document called GenAI Content Risk Standards,

(08:12):
which are basically the standards used by the company to build and train its generative AI products, including defining what they should and shouldn't treat as permissible chatbot behavior. So,
when Reuters reached out to Meta about this document, Meta was like, oh, no, no, don't worry, don't worry, we took that provision out. I'm so sure that they happened to take that provision out right after Reuters was like, sorry,

(08:34):
y'all, making chatbots that have spicy chats with kids or what?

Speaker 2 (08:37):
It is wild that somebody thought writing that down was a good idea. Like, even if they thought nobody was going to see it, you would hope that as they put their fingers to the keyboard, they'd be like, huh, this feels a little weird, like, maybe we shouldn't be engaging children in romantic conversations. But I guess no, I guess they're just moving fast and breaking things, like children.

Speaker 1 (09:00):
Oh, side note, it's my favorite thing ever when, you know, the dust is settling and Reuters is looking at all the documents and the emails, and you see something that you're like, well, that definitely should not have been put in writing. If y'all are going to do this, which you shouldn't, you should have at least been like, this is a conversation we should just have out loud in person, not put in writing, not creating a paper trail for this conversation.

Speaker 2 (09:21):
Yeah, like, don't put it in email, don't put it in Slack. Certainly don't encode it in your official policy guidance.

Speaker 1 (09:28):
So the document seen by Reuters, which is like two
hundred pages long, provides different examples of acceptable chatbot dialogue
during romantic and sensual play with minors. It includes things
like quote, I take your hand guiding you to the bed,
our bodies intertwined. I cherish every moment, every touch, every kiss.

(09:48):
That is an example of something that it is permissible for Meta's chatbot to say to a child.

Speaker 2 (09:54):
You know, I could imagine somebody somewhere, like a Zuckerberg apologist, I guess, being like, well, you know, kids are always engaging in romantic conversations with each other, like, you know, what's the harm here? But this is them talking with software built by a corporation. Like, it

(10:18):
feels like there should be pretty strong guardrails against corporations
engaging in romantic behavior with children.

Speaker 1 (10:26):
And even beyond that, I just think about the idea that we are all currently ensnared in this dynamic where we're being told the Internet needs to be heavily censored and restricted for everyone, even adults, specifically to keep kids safe and to keep kids away from sexual content. Meanwhile, Mark Zuckerberg is like, go ahead and make bots that sexualize

(10:47):
the kids. I just cannot hold these two things in my head at the same time. We're simultaneously being told that I, as an adult, can't access Internet spaces meant for other adults. But this is fine.

Speaker 2 (10:59):
Yeah, it's a great point. I really don't see how it's possible to reconcile these seemingly wildly different approaches to what sort of content is and is not permissible on the Internet.

Speaker 1 (11:10):
So, just like that Kendall Jenner chatbot collaboration, Reuters found that these unauthorized celebrity bots often insist to users that they are the real actors and real artists, and the bots routinely made sexual advances, often inviting a test user for meetups. And some of the AI-generated celebrity content that the bots produced was, like, pretty spicy. When asked

(11:32):
for intimate pictures of themselves, the adult chatbots produced photorealistic images of these celebrities posing in bathtubs or dressed in lingerie with their legs spread. So Reuters spoke to Andy Stone from Meta. He gave some typical non-answer that basically is like, yeah, we did it, what do you want?
He said, Like others, we permit the generation of images

(11:54):
containing public figures, but our policies are intended to prohibit nude, intimate, or sexually suggestive imagery. This is what he said when confronted with the reality that that is exactly what's happening on the platform.

Speaker 2 (12:06):
Yeah, like, well, their policies are intended to prohibit it. That's not worth a lot when their software is just going ahead and doing it anyway. And so where's the accountability? There just isn't any.

Speaker 1 (12:22):
I found this particularly interesting. Stone did say that celebrity characters were acceptable so long as the company obviously labeled them as parodies. So, many of these accounts Reuters found were labeled as parodies, but many were not. Meta deleted about a dozen of the bots, both parody avatars and unlabeled ones, shortly before the story's publication, and Stone just

(12:43):
did not comment on that removal.

Speaker 2 (12:47):
This is like maybe a little bit of a side conversation,
but I feel that as a society, we are really
stretching the meaning of the word parody.

Speaker 1 (12:55):
That's what I'm saying. Like, in what way is a sexy Taylor Swift bot parody?

Speaker 4 (13:00):
Like?

Speaker 1 (13:00):
What is being parodied?

Speaker 2 (13:01):
Yeah, what is being parodied? It's not a parody, it's a representation. But parody implies some kind of, like, satire or humor or commentary. What is the commentary of, like, a sexy Taylor Swift bot in a bathtub?

Speaker 1 (13:22):
I have the hots for Taylor Swift, that's the commentary.

Speaker 2 (13:26):
Yeah. Uh, I don't think that's parody.

Speaker 4 (13:33):
Let's take a quick break. And we're back.

Speaker 1 (13:50):
So we're talking about how Meta made these unauthorized celebrity chatbots that use the name and likeness of celebrities like Taylor Swift and Selena Gomez. Reuters spoke to Duncan Crabtree-Ireland, the national executive director of SAG-AFTRA, a union that represents film, TV and radio performers, who made a very good point about safety. He said that artists already

(14:10):
face potential safety risks from social media users forming romantic attachments to a digital companion that represents, speaks like, and claims to be a real celebrity. Stalkers already pose a significant security concern for these stars, saying, we've seen a history of people who are so obsessive toward talent and of questionable mental state. If a chatbot is using the image of a person and the words of a person,

(14:32):
it's readily apparent how this could go wrong. Which I completely agree with. And as we saw with that Kendall Jenner-inspired bot that lured a man to his death in New York: if a bot that looks like Taylor Swift is saying, oh, come visit me in Nashville, and you're talking to somebody who is cognitively or mentally impaired, I agree with

(14:55):
Crabtree-Ireland that that's pretty clearly a recipe for disaster. Yeah.

Speaker 2 (15:00):
I mean, we started this conversation talking about how Joe Rogan, who, you know, regardless of what you think of him, I think most people would agree is of more or less sound mind, even he was duped, right? And there's a lot of vulnerable people out there who probably have an even more difficult time telling fantasy from

(15:23):
reality than Joe Rogan does. And, you know, it is worthwhile thinking about the risks to those people from these chatbots impersonating celebrities, and the risks to others around them. Like, does Facebook have a Jodie Foster chatbot?

Speaker 1 (15:45):
Wow?

Speaker 2 (15:46):
What might someone do to impress that chatbot?

Speaker 1 (15:50):
You know, that's like my favorite. I won't even get
into it.

Speaker 2 (15:53):
I won't even get into it. I shouldn't have brought it up. But yeah, it was interesting hearing that SAG-AFTRA is pursuing legislative remedies that would protect everyone, not just celebrities. That is nice to hear. You know, we haven't covered
it on the show yet, but I've been really interested

(16:13):
in this new law in Denmark where every individual citizen
by law now has a copyright to their identity and
their likeness. I think that's a really interesting and creative
approach to solving this problem of chatbots stealing people's identities
and deep fakes. And I'm so curious how that plays out.

(16:39):
And I have to imagine that a lot of these kinds of accounts that we're seeing here, that Facebook is creating of celebrities, that it's calling parody even though they aren't parodies, might not be possible if those individuals held the copyright to their likeness.

Speaker 4 (17:00):
Maybe.

Speaker 1 (17:01):
Yeah, that is a really interesting potential solution to this issue.

Speaker 2 (17:05):
Yeah, I don't know if it would work here because of the First Amendment, but I just find it really interesting, and one of the more creative, I don't know, approaches that I've encountered. So Facebook is trying to get kids to talk with sex bots. That's cool.

(17:25):
What else is happening on the Internet? Is it just a sexapalooza out there, Bridget?

Speaker 1 (17:29):
I'm glad you asked, Mike, because not exactly. You know, we're talking about how Meta's child sensuality bots are happening against the backdrop of all of these different age restriction laws coming for the Internet. Those laws have started with porn and adult content sites. But now The 19th reports that those laws requiring folks to prove that

(17:50):
they're eighteen or older before accessing any kind of sexually explicit content online are threatening the livelihoods of adult content creators. And it turns out that queer and trans creators are really shouldering an outsized burden here. So here's just where we're at right now: twenty-five states have passed laws that require people to upload a picture of their government ID, scan their face, or confirm banking information

(18:14):
before viewing sexual content. Real quick: I would never. I mean, I feel for people in these states. But the idea that I'm trying to have an intimate, private moment, you know what I'm talking about, and it's like, oh, let me get my passport out, let me get my routing information to my real bank account out. It's just, oh,

(18:36):
I would never.

Speaker 2 (18:37):
When did we get up to twenty-five states? I thought it was like a few states. That's half. Twenty-five is half?

Speaker 1 (18:44):
All right, let me give you the breakdown. Okay, so these are the states that have laws that are passed but not yet in effect: Arizona, Ohio, Missouri. Arizona and Ohio's laws will apparently come into effect in late September. And the states that have laws in effect currently: Alabama, Arkansas, Florida, Georgia, Idaho, Indiana, Kansas, Kentucky, Louisiana, Mississippi, Montana, Nebraska,

(19:07):
North Carolina, North Dakota, Oklahoma, South Carolina, South Dakota, Tennessee, Texas, Utah, Virginia, Wyoming.
I know about Virginia because I live in DC, which is pretty close to Virginia, and oftentimes, I don't know why, but there's some sort of issue where my IP address sometimes makes it seem like I'm in Virginia even though I'm in the District of Columbia. FYI, statehood for DC.

(19:29):
So every now and then I will encounter, like, the Cloudflare thing that's like, oh, you need to do this, you need to do this. And we know that we have seen a rise in VPN usage, which we'll talk about in a moment. So not everybody is being impacted; not everybody has to actually go through the rigamarole of showing their information to access adult content online, because of the rise of

(19:51):
VPN usage. So all of this has really taken a toll on queer, trans and indie adult content creators, because bigger platforms can absorb the costs associated with age verification online, but many of these indie creators or smaller studios really cannot. The 19th reports that fewer people have been viewing, and thus paying for, their content, and creators and studios also
The nineties reports that fewer people have been viewing and
thus paying for their content, and creators and studios also

(20:12):
have to pay out of pocket for age checking software
and contend with things like potential massive fines for non compliance. Additionally,
all of these laws as another added layer of risk
for creators lawsuits for non compliance that can be used
to leak full names and addresses of sex workers who
are already vulnerable to stalkers and anti porn harassment. The

(20:33):
19th spoke to Lorelei Lee, a sex worker, organizer and professor of law at Cornell University, who said, this is going to have an immediate impact on many, many people's ability to survive. And let's be real, this is far from the first challenge that these communities have faced. Indie queer porn studios are already struggling to stay afloat because they cater to kind of a niche audience that has

(20:54):
less disposable income to spend. And we know that because of legislation like FOSTA-SESTA, creators have long had to deal with things like websites suppressing their content and payment processors kind of arbitrarily freezing their accounts on suspicion of sex trafficking. Add to that, the recent economic downturn has really forced everybody to cut extra costs, which would

(21:16):
obviously include things like adult content subscriptions. And now there's an added extra layer of cost for these creators in the form of age checking software and massive fines for noncompliance. So this community is really feeling the squeeze more than ever.

Speaker 2 (21:33):
Yeah, and not just those costs of doing the age checking, but also now they have liability for properly handling people's government ID images and information, right? Like, we saw that that was a big problem for the Tea app a couple weeks ago, that they were doing something similar and
(21:53):
they didn't handle that information properly, and I'm pretty sure they're now facing, like, multiple lawsuits over it. So it's not just the added expense of having to do the age verification, but then also the added expense and liability of properly handling that very sensitive information.

Speaker 1 (22:14):
Yes. Our conversations about the Tea app and the TeaOnHer app really go to show that it matters how this verification information, whether it's a driver's license or a selfie, is protected, and whether or not it's protected can't be an afterthought, and that comes at a cost. So, I didn't realize this: studios basically just have to eat the cost of age verification.

(22:39):
The 19th spoke to one independent studio that pays between one and three cents per verification, which might result in monthly charges of several hundred dollars. And there is no guarantee that a site visitor will actually turn into a paying customer. So I might just be going on to a site that the site has to pay for to verify my age, but there's no guarantee that I'm going to

(23:00):
spend any money at all on that site.
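
Quick back-of-the-envelope math on that, just to make the scale concrete. The one-to-three-cent rates are from The 19th's reporting above; the visitor counts in this sketch are purely hypothetical:

```python
# Rough sketch of per-visitor age verification costs for a small studio.
# The $0.01-$0.03 per-check range comes from the reporting discussed above;
# the monthly visitor counts are made-up illustrative numbers.

def monthly_age_check_cost(visitors: int, rate_per_check: float) -> float:
    """Every visitor must be verified, whether or not they ever pay."""
    return visitors * rate_per_check

for visitors in (10_000, 25_000):
    low = monthly_age_check_cost(visitors, 0.01)
    high = monthly_age_check_cost(visitors, 0.03)
    print(f"{visitors:,} visitors/month -> ${low:,.0f} to ${high:,.0f}")

# 10,000 visitors/month -> $100 to $300
# 25,000 visitors/month -> $250 to $750
```

That lines up with the "several hundred dollars a month" figure, and the cost accrues whether or not a single one of those visitors ever subscribes.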

Speaker 2 (23:01):
Site, Oh that's really interesting, And I would guess that
a lot of those users are AI web crawlers who
are very unlikely to spend money on the site.

Speaker 1 (23:15):
Exactly and I mean that point really goes to show
how poorly thought out some of these laws are, because
they are not even necessarily accomplishing the stated goal of
keeping kids away from adult content. Everybody wants to keep
kids away from adult content, except.

Speaker 2 (23:32):
For Mark Zuckerberg.

Speaker 1 (23:33):
Except for Mark Zuckerberg, apparently. But these laws don't even seem to be having the intended effect. Research has shown that searches for virtual private networks, or VPNs, which allow anyone to spoof their geographic location when accessing the Internet, went up after the first state-based age verification law went into effect in Louisiana, as did traffic to the

(23:53):
sites that did not implement age verification systems. In other words, these laws are not even necessarily accomplishing the goal of preventing minors from accessing porn, because people can just use VPNs, and people can just go to sketchy websites that are like, we don't traffic in any of that.
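
A quick illustration of the mechanics being described here: sites typically decide whether to show the verification wall based on where your IP address appears to be, which is exactly what a VPN changes, and it's also why a DC connection can read as Virginia. This is a minimal hypothetical sketch, not any real site's code; the state list, lookup table, and IP addresses are all made up (the IPs come from documentation-only ranges):

```python
# Illustrative only: how IP-based geo-gating typically decides whether to
# require an age check. A VPN swaps the visible IP, so the lookup returns
# the VPN exit's location instead of the user's actual one.

GATED_STATES = {"TX", "LA", "VA"}  # partial, hypothetical list

def geolocate(ip: str) -> str:
    """Toy stand-in for a real IP-to-region (GeoIP) database lookup."""
    toy_db = {"203.0.113.5": "TX", "198.51.100.7": "OR"}
    return toy_db.get(ip, "unknown")

def needs_age_check(ip: str) -> bool:
    # The site never sees the user's true location, only the IP's apparent
    # one -- which is also how a DC user can get misread as being in VA.
    return geolocate(ip) in GATED_STATES

print(needs_age_check("203.0.113.5"))   # True: a Texas IP hits the wall
print(needs_age_check("198.51.100.7"))  # False: a VPN exit in Oregon sails through
```

GeoIP lookups are approximate by design, which is why both failure modes discussed in this episode, mislocated DC users and VPN users dodging the wall entirely, fall out of the same mechanism.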

Speaker 2 (24:09):
Yes, that's how the Internet works. I'm really glad you
brought up that research. I think it's so important and
valuable here. I feel like so many of the policy
debates that we're having in America right now about the
Internet and really all sorts of things are just completely
detached from reality. And I think that policy debates around

(24:32):
morality stuff and adult content are some of the worst offenders in terms of having no connection to what's actually happening in the real world. You know, these studies, which we'll link to in the show notes, provide, like, actual empirical evidence showing that these age verification laws not only don't reduce access, but are actually causing

(24:55):
people to engage in more harmful behaviors than they otherwise would. It's sending people to sketchier corners of the Internet where they're just like, nah, we don't have to follow that law, either because they're based abroad, or maybe they're just, like, a fly-by-night website that somebody spun up with pirated content, where, you know, if regulators ever

(25:17):
come after them, they'll just shut it down and pop
up somewhere else. But it does seem like one of
the biggest effects of those laws is driving people from
reputable sites attached to you know, indie creators or indie studios,
where they do have an interest in not getting sued
into oblivion, so they're actually trying to follow the law

(25:37):
driving people from those sites to sketchier ones. And it doesn't take a lot of imagination to guess that those sketchy sites are less likely to pay creators for content. If they don't care about the law, then why would they bother paying creators for their content when they could just steal it? It's, like, super easy to steal stuff on the Internet.

Speaker 1 (25:58):
And I think we have this landscape where big companies like Pornhub can just afford to do whatever, right? Just this week, Pornhub had to settle with the state of Utah for five million dollars over claims that they did not do enough to take child sex abuse materials seriously on their platforms. And so you have these big major players who can afford to do age verification

(26:20):
and can just frankly afford to pay out when they
aren't following the law and aren't doing what they're supposed
to be doing. Then you have, as you said, these
sites that are like, we're not doing any of that,
we operate outside of the law. And then you have
these small shops and creators who genuinely do want
to comply with the law, but the cost of doing
so is making it hard for them to simply stay

(26:41):
in business. The 19th spoke to one indie studio who said,
we're seeing sales drop, but we're also seeing a lot
of dear friends and colleagues and other important makers in
our field decide to close up shop. It feels like
there is no possible way to proactively comply with the
requirements that are being asked of us, and that seems
like an intentional move on the part of lawmakers. So
I absolutely agree with this creator here, and I mean

(27:03):
you don't have to take my word for it, because the architects of Project twenty twenty five have been explicit in saying the goal is not to keep kids away from porn. They're saying that, but the actual goal is to ban all adult content in the United States. Russell Vought, one of the architects of Project twenty twenty five and the current director of the federal Office of Management and Budget,

(27:24):
was caught on tape calling age verification laws a quote
back door to banning all adult content, which he clearly
says is their explicit stated goal.

Speaker 2 (27:34):
Yeah, not surprising, you know, if you've been paying attention for the last two hundred years to these morality crusaders who want to tell other people what they can and can't do. Great strategy for them: just yell, for the children! Save the children! As we've seen time and again, whenever

(27:55):
somebody is running around with their hair on fire screaming about saving the children, that should be a big red flag that they are not saying what they actually want.

Speaker 1 (28:03):
Oh yes, absolutely, that is like, your spidey senses should be tingling whenever you hear that. And of course, all of this is making everything less safe. Lorelei Lee, that sex worker, organizer and lawyer at Cornell, said, it opens you up to fans, stalkers, people with ill intent suing you, and by suing you, gaining access to your legal name, your location, where you work. Oftentimes that location

(28:26):
is your home. Lee said, every time they pass a
law like this, the further and further we are pushed
out of the mainstream, and that means not just losing
access to resources through our work, but losing access to
resources in terms of familial contact and social connections, even
the ability to get public services. And so in this
crusade ostensibly meant to keep kids safe, what they are

(28:47):
doing is pushing people further and further and further from the mainstream, potentially into some of these less regulated places, and creating a harmful situation that really opens them up to all kinds of real-world harm.

Speaker 2 (29:02):
Yeah. Again, it just feels like a detachment from reality, where you're trying to achieve this universe where adult content does not exist. And I'm sorry, but this is, like, twenty twenty five. We have the Internet. The porn industry has led technological development for decades. It's going to continue

(29:25):
to exist. The question is just whether it exists in an above-board way, or if it's pushed to seedy, sketchy extremes where no one is safe.

Speaker 1 (29:40):
And to your point about how, you know, when people talk about protecting kids: it really deeply bothers me how crusades that are about protecting women and girls, protecting kids, you know, protecting trafficking victims, are so easily used as a way to avoid accountability for what you're

(30:02):
actually doing, right? And a good case of this I think we're seeing right now. Because, I don't know if you remember this, but when Elon Musk first took over Twitter, he made a big song and dance about how he was going to make sure that his platform was free of child sexual abuse material. Right? He was like, we're taking a huge stand on this. And I remember specifically he got so many congratulations, like, finally somebody is

(30:26):
taking a stand for these kids. And in case you're wondering how actually protecting women and girls is going: currently, a child sex abuse survivor is begging Elon Musk to remove images of her being abused from his platform, X. Gotta throw a big trigger warning on this one, because the BBC has a horrifying story about a woman that they're calling Zora, who was sexually abused by a family member

(30:49):
twenty years ago, and images depicting that abuse are essentially being marketed for sale all over X. The BBC found images of Zora while investigating the global trade of child sex abuse material, estimated to be worth billions of dollars by Childlight, the global child safety institute. So material featuring Zora was among a cache of thousands of

(31:09):
similar photos and videos being offered for sale on X. Zora is furious about this. She says, every time someone sells or shares child abuse material, they directly fuel the original horrific abuse. My body is not a commodity. It never has been and never will be. She says, those who distribute this material are not passive bystanders, they are complicit perpetrators. Honestly, her story is so heartbreaking. Zora said

(31:33):
that she has tried to overcome her past and not
let it determine her future, but perpetrators and stalkers still
find a way to view this filth. Over time, as
she grew older, stalkers uncovered Zora's actual identity, contacted her
and threatened her online. She says that to this day,
she feels bullied over a crime that already robbed her
of her childhood. So the actual images of this abuse

(31:57):
are available on the dark web, but they are openly promoted and marketed on X. Posts on X use different hashtags that are familiar to pedophiles. The images that appear on X are often taken from known child abuse images, but they're cropped in such a way that they're not explicit, right? And so if you are somebody who is familiar with this space, you know, like, oh, this is an image

(32:19):
from this kind of content, but it's cropped in such a way that it is not overtly explicit, which I think is the way to get around whatever kind of policies might be triggered on the platform that would take that content down.

Speaker 2 (32:30):
Just to recap: at X, they are very concerned about child sex abuse material, but their software has been defeated by cropping.

Speaker 1 (32:40):
Correct. And when the BBC reached out to X about this, they said, X has a zero tolerance policy for child sexual exploitation; we continually invest in advanced detection to enable us to take swift action against content and accounts that violate our rules. But I mean, obviously not, because the BBC is saying, hey, every single time that this gets pulled down,

(33:02):
it's back up very quickly thereafter. You're not taking it down swiftly. So it is kind of just like with the Facebook statement, how they're able to be like, oh, we don't have a problem with that on our platform, our policies forbid it. It's like, oh, okay, well, then I guess Zora is making it up. Yeah.

Speaker 2 (33:20):
There's also a little bit of an irony about how
just completely lawlessly Musk and Zuckerberg operate in their own
business dealings with like zero concern for the law, zero
concern for like who gets harmed. They just don't care.
They just do it anyway because they know that they
won't face accountability. And yet they flip that around when

(33:45):
their platform is accused of doing something wrong, and they're like, oh, well, you know, the rules say it's not allowed. So, you know, the rules are the only thing that matters here.

Speaker 1 (33:53):
Yeah. And I think the reason why Musk started his tenure at Twitter talking tough about how he was going to stick up for victims and, you know, crack down on child sexual abuse material on social media platforms, I think it's like advocating for the unborn, right? It's a

(34:14):
group of people that, for whatever reason, we have trained
ourselves not to listen to. The only people that we kind of listen to and hear from are people who are quote unquote advocating for them or protecting them.
So when an actual victim of child sexual abuse comes
forward and says, hey, this is what happened to me,

(34:34):
I need you to do X, Y, Z, it's like we don't listen. The way that she is having to beg somebody who voluntarily got up and said that cracking down on this kind of material was going to be his whole thing really should tell you a lot about how much he's actually interested in cracking down on abuse material on his platform. I think they really love the idea of advocating for this group, because speaking for

(34:57):
and protecting automatically makes them the hero and gives them
this ability to occupy a completely unchallenged, unscrutinized moral high ground.
And for whatever reason, there is this assumption that the
actual victims and survivors are a silenced, unseen group of
like child victims and trafficking victims or abuse victims who

(35:19):
will never actually speak up and say this is what
we need. If you want to advocate for us, if
you want to protect us, please do X y Z.
So they get all the props of doing something good,
They get to enjoy being the hero for advocating for
these people, but not actually having to, I don't know,
listen to them or really do anything to actually advocate

(35:40):
for them. That's why we see people who are interested
in sort of amassing power and not getting too much
scrutiny on what they're actually doing, not really getting a
lot of accountability, taking up the mantle of, oh, I'm protecting kids. Because when you say you're protecting kids or protecting victims, I think it's used as a shield to evade any kind of actual accountability about what it is that

(36:00):
you're doing. Which, according to the BBC, it sounds like Elon Musk is doing fuck all to protect these victims.

Speaker 2 (36:06):
Yeah, certainly not protecting Zora. You know, I really feel like, increasingly, his brain is just, like, addled and bad, because, like, one of the things that he is doing on his platform is making it absolutely a nightmare

(36:29):
for trans people to even exist. And I do think he probably would say that. Like, he said things that make it seem like he thinks the existence of trans people is harming kids, and like that's what he's focusing on, which is nonsense, to the exclusion of

(36:49):
this, like, actual child sex abuse victim who is saying, this thing exists on your platform and is harming me, please take it down. And it sounds like it's just been crickets.

Speaker 1 (37:03):
Yeah. When the BBC told Zora that her photos were being traded using X, she had this message for Elon Musk: our abuse is being shared, traded and sold on the app that you own, motherfucker. Motherfucker I added; she didn't

Speaker 2 (37:16):
Say that, empressis added.

Speaker 1 (37:19):
If you would act without hesitation to protect your own children, I beg you to do the same for the rest of us. The time to act is now. And as you said, this is a tangible thing that Elon Musk has control over and can do to protect kids. Ranting in
the middle of the night about trans people is not
actually protecting kids. Here we have somebody who is a

(37:41):
survivor of childhood sexual assault, whose assault is being marketed on your platform, saying, please take this down. And yeah, all she's getting back in response is a spokesperson saying, we take swift action against that kind of stuff on our platform. Which, again, clearly not.

Speaker 2 (38:00):
Now, I think the answer is in her quote there: he would not act without hesitation to protect his own children. He has openly disowned his own children. So get fucked, everybody.
That's the Musk way.

Speaker 1 (38:16):
Although Musk's daughter Vivian Wilson did have a stunning photo shoot in The Cut recently, which, I don't know. She's better off without him.

Speaker 4 (38:28):
After a quick break, let's get right...

Speaker 1 (38:41):
...back into it.

Speaker 2 (38:43):
You want a little update on Luigi Mangione? Yeah, what's going on with America's favorite alleged assassin?

Speaker 1 (38:51):
So y'all probably know that Luigi was accused of killing UnitedHealthcare's CEO last year, but you might be surprised to learn that he is also currently modeling shirts over at Shein. Kind of. I assume this is an AI-generated image of his likeness that was used on Shein's website to model a floral, short-sleeved, button-down shirt.

(39:14):
It actually sold out, so I guess this was a factor.

Speaker 2 (39:18):
Yeah. I saw people posting about how mad they were
that they missed out. It was a nice shirt.

Speaker 1 (39:23):
So Shein did remove the listing, but someone saved it on the Internet Archive before Shein was able to take it down. Shein told Newsweek: the image in question was provided by a third-party vendor and was removed immediately upon discovery. We have stringent standards for all listings on our platform. We are conducting a thorough investigation, strengthening our
monitoring processes and will take appropriate action against the vendor

(39:44):
in line with our policies.

Speaker 2 (39:46):
Our policies forbid it.

Speaker 1 (39:48):
I know. That's the theme of this show: spokespeople being like, our policies forbid this. And it's like, well, we're talking about how it happened, so I don't know if the policies are working.

Speaker 2 (39:56):
Yeah, this is like the third story that's been like that.

Speaker 1 (39:59):
No, honestly, Mike, this could be a whole episode. In my former life, when I was working at an advocacy organization where I had to talk to social media platforms about how their policies were failing... I could write a book about corporate, especially tech company, but just corporate, say-nothing bullshit, the

(40:21):
way that they will. I mean, the fact that all of these spokespeople have said some version of, our policies forbid this, or that, you know, we take this very seriously. That means nothing. I implore people: when you are reading an article or an investigation that includes a spokesperson statement, highlight how much of it actually says anything

(40:42):
meaningful, and you will never take the cap off your highlighter, I am telling you.

Speaker 2 (40:46):
Yeah. Back when I worked for a tech company that handled data that was protected by HIPAA, like people's health information, you know, we had to treat that data very carefully, and there were a bunch of policies that we had to follow, and a bunch of trainings we had to do all the time. Like, every year,

(41:07):
you'd have to redo these trainings and certify that you
did the training so that you understood the policy and
the protocols, and oh my god, it was so boring.
But it was also stressed to us over and over that, like, the existence of the written policy meant nothing, right?
Like if we screwed up and accidentally published people's private

(41:28):
health information to the Internet and it got exposed, the
fact that we had a policy that said that wasn't
supposed to happen would mean nothing right. There would still
be accountability to the organization and potentially accountability to the
individual who allowed the breach to happen. And it's interesting
to read through, you know, story after a story where

(41:49):
spokesperson is saying like, oh, our policy is forbidden, our
policies don't allow this, because there's just no whiff of
accountability any where.

Speaker 1 (42:00):
To the point where, in what way is it meaningful to name-check a policy if that policy has not been followed? Honestly, if I was working for these companies, I would say: if you're being contacted about a story in which the policy was not followed and, like, a tangible thing happened, don't name-check the policy that wasn't followed. That's meaningless. Say something different.

(42:23):
I respect that Shein at least said, we're investigating. At least that is like, okay, we understand that this wasn't great and we're looking into it. That's at least something.

Speaker 2 (42:33):
Yeah, an acknowledgment that the written policy is in some
way disconnected from what is actually happening in practice.

Speaker 1 (42:41):
So, Luigi modeling that shirt is not the first time that this kind of thing has happened. Shein uses a lot of AI-generated models. 404 Media reports that the Manfinity brand.

Speaker 2 (42:53):
That's a brand, Manfinity.

Speaker 1 (42:55):
Manfinity. They are the brand that... sorry.

Speaker 2 (43:01):
To Manfinity and beyond.

Speaker 1 (43:04):
So Manfinity is exactly what you're thinking.

Speaker 4 (43:06):
It is a it is a.

Speaker 2 (43:08):
Brand? Can't possibly be true.

Speaker 1 (43:11):
So the Manfinity brand is this, like, workout gear company that sells on Shein and generates AI images of kind of buff guys wearing tank tops and workout gear. Fast fashion retailers like Shein are not the only shop in town using AI models; they are becoming much more mainstream. Y'all

(43:31):
might recall that there was a bit of an outcry earlier this summer, in July, when Vogue ran advertisements for Guess featuring AI-generated women selling the brand's summer collection. So, I kind of like Shein using Luigi. That's hilarious. And I think there was a whole conversation about how handsome he was, which, like, I don't know, like, I

(43:52):
won't get into the ethics of that, but it doesn't surprise me that that happened, and I think that's really funny. But in general, the use of AI models in fashion really troubles me, just as a person who remembers being a young girl at the height of Top Model. You know,
they would have a woman who

(44:13):
was a size four, and Tyra would be like, well, as a plus-size fatty... Like, young me was like, oh, she's fat?

Speaker 4 (44:20):
Oh.

Speaker 1 (44:20):
Like the way that fashion and modeling just really gets
in your head as a young person is already bad enough,
but if you can use AI to depict people who
aren't even real, I can only imagine what that will
do to beauty standards, how that will shape the next
generation of folks coming up behind us.

Speaker 2 (44:42):
Yeah, it is a great example, I think, of how in a lot of cases, AI is not inventing new problems, but it's taking existing problems and supercharging them and making them so much more harmful. Now, with models that are just completely AI-generated, free from the constraints of biological plausibility,

(45:09):
it's ridiculous.

Speaker 1 (45:12):
This is gonna sound so silly, but I remember a couple of years ago when legislation was passed saying that mascara commercials and advertisements could not use false lashes. So, for the longest time, you could just pop some false lashes out of a box onto a model and then lie to people and say she

(45:34):
got these lashes from our mascara. And it seemed like such a small thing. But I remember thinking, obviously you shouldn't be able to do that if your whole thing is trying to get consumers to buy your mascara because of what it can do to your lashes. I just remember thinking, obviously they should not be able to do that.
But I just think there's no end to how technology

(45:56):
is going to be used to lie to us and
really promote things that aren't real, you know, the fact
that they've been doing it so long already. I think
that AI is just going to be used to make these,
as you said, make these existing problems so much worse,
worse than our wildest imaginations can probably even conceptualize.

Speaker 2 (46:17):
I also wonder how effective these advertisements are. Because, like, if we think about mascara and eyelashes, would you be interested in buying a mascara brand based on an ad that, you know, is AI generated?

Speaker 4 (46:37):
No?

Speaker 1 (46:37):
And that's the thing I don't understand. You know, I think we're getting so far from what the point of some of these advertisements is, right? At a certain point, I've got to see how the jeans look on a human, you know what I mean? I've got to see how the mascara looks on lashes. We're sort of so beyond what these advertisements, in my mind, should be doing. It's like, oh,

(46:58):
we're selling a lifestyle, we're selling an idea, a feeling. But really, how do the jeans fit? Right? Do they work on a human body?

Speaker 2 (47:06):
Yeah, yeah, that's the question. Like, does it matter whether you're communicating how the jeans fit, or is it all just marketing? You know, selling a feeling, selling a vibe, like cologne commercials and perfume commercials, which are the most unhinged, like, conceptual fever-dream pieces. Because, like, how

(47:32):
are you gonna reproduce a scent on a TV?

Speaker 1 (47:36):
You're not, obviously. People running down a beach with flowing garments in black and white, Mike.

Speaker 2 (47:44):
Yeah, with, like, Pegasuses swirling around a mountain and, like, you

Speaker 1 (47:49):
Know this is this is such a tangent Gucci. I
feel like they're like fragrance commercials are the biggest offender
of that, where it truly will be pegasus is running
in the wind and hair being blown. Maybe they'll put
some words in there like eternity forever.

Speaker 2 (48:07):
Yeah. Right, they're like early adopters of this where their
ads are already like CGI animation.

Speaker 1 (48:13):
Yeah. I mean, maybe I'm Pollyanna and none of this matters,
and none of this has mattered for a very long time,
and I just need.

Speaker 2 (48:19):
To catch up. Maybe. Or maybe that works for fragrances, but actually is not what people want for products like jeans and makeup that actually need to, like, work on their own body. I guess we'll see.

Speaker 4 (48:35):
I guess we'll see. More after a quick break.

Speaker 1 (48:51):
Let's get right back into it. Okay, I have a little bit of bad news, good news on the abortion front. So I want to start with some bad news. Texas lawmakers voted to enact House Bill seven, ushering in sweeping new restrictions on abortion pills mailed to the state. This
bill is now awaiting Governor Greg Abbott's signature as of today, Friday,

(49:14):
September fifth. So, at the center of this is what is called teleabortion, which is where you have a consultation remotely, sometimes over Zoom, and then abortion pills are mailed to your home. Teleabortion is great. It is often much easier to do a pill-based abortion at home. It's also very safe: serious complications occur in less than point three percent of people who use it.

(49:35):
But that piece, teleabortion, is what is at the center of all of this in Texas. So one scary
piece of this new bill is that it gives private
citizens the right to sue health providers for mailing, prescribing,
or providing abortion medication to patients in Texas. Providers sued
under this bill risk penalties of at least one hundred
thousand dollars. Further, this bill will also authorize lawsuits against

(49:58):
pharmaceutical manufacturers if they make medications that are then used
by Texans for abortion. This obviously could make abortion pills
harder to access in general, not just in Texas, if
any pharmaceutical manufacturer can be sued if those pills are
mailed to Texas. This is from The 19th: Texas is
the largest state to have almost completely outlawed abortion, but

(50:19):
thousands of residents each month have continued to terminate their
pregnancies by ordering medication from healthcare providers who practice in
states where the procedure remains legal. Those medical professionals work
under the protection of shield laws, statutes that hold that their home states will not cooperate with out-of-state prosecution. One estimate suggests that by the end of twenty twenty four,
more than three thousand, four hundred Texans received telehealth abortions

(50:41):
each month. And it's just a good reminder: teleabortion is not just about convenience. It's about access, it's about safety, it's about dignity. For people in rural communities especially, it can mean the difference between getting essential life-saving care and going without. And efforts to restrict abortion generally, but especially teleabortion, do not make anybody safer. They just

(51:03):
put everybody more at risk.

Speaker 4 (51:05):
Now.

Speaker 1 (51:06):
I don't want to just leave y'all with bad news
without a little bit of a good news chaser, which
is that Governor J. B. Pritzker signed into law HB
thirty seven oh nine, making Illinois the fourth state in
the first in the Midwest to require public colleges and
university ensure that students have convenient access to medication abortion
and contraceptives. In doing so, Illinois joins the expanding group

(51:29):
of states requiring that student health centers offer
abortion pills. California was the first in twenty nineteen, followed
by Massachusetts in twenty twenty two and New York in
twenty twenty three. This new law, signed on August twenty second,
requires that public colleges and universities provide access to abortion
pills through student health centers, telehealth, or licensed external providers,
and also requires that campus pharmacies dispense prescribed contraceptives and

(51:52):
medication abortion starting in the twenty twenty five twenty twenty
six school year. And this move is really thanks to
the diligent organizing of students. I feel like I'm kind
of patting myself on the back a little bit here
like twenty years later, because I got my start in
reproductive rights campus organizing when I was a student at

(52:12):
East Carolina University. Let me tell you, in North Carolina
circa two thousand and three, two thousand and four, that
was some lonely work.

Speaker 2 (52:20):
What was your campaign? What were you trying to achieve?

Speaker 1 (52:23):
On paper? It was to protect abortion access in the state,
especially for students, like in my heart, just make the
campus a more feminist, radical place. You know, like a
lot of campus organizing, it was not always the most
laser focused work. I guess I'll just put it that

(52:43):
way if I'm going to speak diplomatically, but abortion access,
access to contraceptives on campus was a big part of
what we were pushing for. And so, yeah, this move is really because of student organizers on campus. Ms. reports that in twenty twenty four, students at the University of Illinois Urbana-Champaign passed a referendum urging student health centers to make medication abortion available for students. Nearly three quarters

(53:07):
of the six thousand, three hundred and fifty-four student voters supported this idea. Student leaders Emma Darbrough and Grace Hosey testified before the legislative committees to push HB thirty seven oh nine forward, advocating for accessible and affordable reproductive health care for college students. Then they teamed up with Representative Barbara Hernandez and State Senator Celina Villanueva, who then sponsored

(53:29):
this bill in the legislature. And I guess I wanted
to include that because it's just a good reminder that
as bleak as things are, organizing still works. Your voice still matters, especially if you're a young person. Student organizing is still powerful, even if it can feel like you're shouting into a void sometimes. And truly, this is how we win. This is how we keep each other safe.

Speaker 2 (53:49):
Yeah, organizing is the way to do it. I appreciate the good news chaser, because things do seem pretty bleak at times, pretty bleak right now. But it is really nice to be reminded that students are organizing and positive

(54:12):
things are afoot.

Speaker 1 (54:14):
Amen. Okay, So lately I have been really feeling like
we're about to witness the pop of the AI bubble,
and one kind of small indicator, but an indicator, nonetheless,
is how much people are just not interested or excited
about AI in their consumer products. According to a new
CNET survey, just eleven percent of US smartphone owners

(54:37):
chose to upgrade their devices because of AI features, a
seven percent drop from a similar survey last year. Further,
three in ten people don't find mobile AI helpful at
all and don't want more AI features added to devices. Preach. Wow,
where do you fall on this? Because I have a
specific orientation when it comes to new devices in general,

(54:59):
But you seem like somebody who would be into the newest,
the newest and best.

Speaker 2 (55:04):
I feel like, uh, I don't need to have the
newest and best, but there have been some
pretty impressive advances in like cell phone camera technology in
recent years. Like that's particularly what I'm interested in. The
cameras just keep getting better and better and like noticeably better.

(55:28):
You know, there's so many products where the
newest model is, you know, better
in all these ways, and it's just imperceptible how it's better.

Speaker 4 (55:39):
Uh.

Speaker 2 (55:40):
You know, people are saying that about the AI models
that all the big companies are putting out, like you know,
the new ChatGPT model. People are like, uh, it's
not really that much better. But cameras in cell
phones, like, you take a side by side photo taken
on a cell phone that is like cutting edge today

(56:02):
versus a cell phone that was cutting edge three years ago,
and it is immediately clear that like one of these
cameras is superior to the other.

Speaker 1 (56:12):
That actually speaks to what people say
they actually do care about in their phones. They don't want AI features,
they don't want AI bells and whistles. What they care
about is pretty simple, core, obvious things. Thirty percent of
people cited a nice camera as their top priority when
deciding to buy a new phone. Sixty two percent
said price, fifty four percent said longer battery life, thirty

(56:35):
nine percent said storage. So it's these like basic obvious,
common sense things that people want. But what's funny is
how much of a mismatch this is between what people
say they want and what companies are giving them, because
companies are still like, oh no, no, you want AI,
you want AI, tons of AI features. I think these
survey results really highlight a kind of mismatch of like

(56:55):
what people actually want and what these companies are giving them.
And I think it's general apprehension toward AI, which I
think we're seeing more and more.

Speaker 2 (57:03):
Yes. And I don't even think it's entirely apprehension based.
It's just like I don't want this. Like I can't
tell you how many different pieces of software I use
that are constantly nagging me to use some sort of
AI feature that I'm just not interested in using. Adobe

(57:23):
is one of the worst offenders. Microsoft is constantly anytime
I try to do anything, it's like, oh, do you
want Copilot to do something here? And I don't. And
most of the time when I try, it like doesn't work,
and I'm like, oh, that was a waste of a minute.
They're just... I hope that bubble is bursting, not

(57:47):
the entire AI everything bubble where, like, no one
will ever use AI again, but this deranged era we're
in where tech companies are just trying to force it
into every corner of every product, in every aspect of
our lives, where it just has no business being and

(58:09):
is not useful.

Speaker 1 (58:11):
Yeah, I think apprehension is not the right word. Perhaps
it's fatigue. And I'm so sick of signing on to
something and it's like, oh, do you want to use
our AI tool for this? What if it took longer
and also didn't work, that thing that you're doing? Wouldn't
it be good if it, like, didn't work?

Speaker 2 (58:26):
Yeah? Do you want to ask us a question? And
then our chatbot will take you to a help page
that doesn't actually do what you want. Or if...

Speaker 1 (58:37):
You're a minor, it will take you by the hand
light some candles, lead you to the bed, give you
gentle kisses all over your body. But they're like, I'm
just trying to create this graphic for work.

Speaker 2 (58:47):
Come on, guys, yeah, I'm just trying to create this
graphic for work. How do they end up in a
romantic relationship with a bot?

Speaker 1 (58:54):
I mean, people are often surprised
to find this out about me because I make a
tech podcast, so people assume that I have the latest,
newest gadgets and that I'm obsessed with gadgets. Mike,
tell them what kind of phone I have.

Speaker 2 (59:08):
I don't know, because all the branding has been etched off
by the sands of time. But I do know that
it has a crank.

Speaker 1 (59:18):
I rock an iPhone eleven that I got six years ago.
I have no plans of upgrading. It works fine, it
holds a charge fine. I just, you know. My car
is from two thousand and eight. I'm just not that somebody.
Also, this stuff is expensive. I patently reject the
idea that you need to get a new one every
couple of years, especially if the old one is working

(59:39):
just fine. Y'all can pry my iPhone eleven from my
cold dead hands. Don't need it. You're not gonna get
me with a bell or whistle. The idea of a
new camera on my phone does sound nice, because currently
my camera is essentially inoperable, and as somebody who is
trying to make videos and content for the Internet, I've
had to invest in some workarounds. I'll just put
it that way. So maybe I could use a

(01:00:01):
new phone. But yeah, I'm not one
that feels the need to upgrade all the time.

Speaker 2 (01:00:07):
Wild trajectory of that last statement. You know, like, I
don't need a new phone. My current one doesn't really work,
and the camera is the main thing that people look for
in phones, and my camera doesn't work. But I don't
need a new phone.

Speaker 1 (01:00:22):
It's not the only thing. Thirty percent of respondents said a
camera was their top priority. Sixty two percent said price.
And I'm a penny pincher, so I'm in that
sixty two percent. Make one less than one hundred dollars, Apple,
and then we'll talk.

Speaker 2 (01:00:36):
All right, Yeah, well, just keep rocking it. Your
phone is very small. I will give it that. They've
gotten so much bigger in the past decade.

Speaker 1 (01:00:44):
And they make pockets on women's pants so small. My
pockets are but so large. My phone can only be
but so large.

Speaker 2 (01:00:51):
Words to live by.

Speaker 1 (01:00:52):
Well, Mike, thank you so much for running through these
stories with me. Where can the listeners keep in touch?

Speaker 2 (01:00:59):
Listeners can leave us comments on Spotify, they can
email us at hello at tangoti dot com, and they
can follow us on social at bridgetmarieindc
on Instagram and on TikTok, and There Are No Girls on
the Internet on YouTube. We've been posting videos lately, trying

(01:01:19):
to make YouTube happen. It's a new thing for us,
but I feel like it's kind of fun. It's interesting
to be working in video, and I would love to
hear from listeners how what they think of it.

Speaker 1 (01:01:32):
That was a very diplomatic way of putting it. I
hate it. I'm trying so hard, and honestly, people should write
in because I'm curious how people feel about video. Apparently
a lot of people are getting their podcast content from YouTube.
I don't know if this is YouTube suits telling me
very convenient information for them and I'm just
believing it. But at any event, we're trying on YouTube,

(01:01:54):
I find it to be kind of an undignified medium.
In order to get traction, you have to
make thumbnails where your face is doing things that you
don't like your face to be doing.

Speaker 2 (01:02:07):
Yeah, our listeners don't want that. Our listeners are
busy people. Our listeners are doers. They want to listen
to a podcast while they're doing something else. They want
audio only. They're not trying to sit down and like
watch an hour long video of the same podcast they
could have just listened to while they were like doing

(01:02:29):
the dishes, or doing the laundry or driving somewhere. Our
listeners are our people. But yet it does seem, I
have heard the message, that there are other people out
there hungry for the message of TANGOTI who only listen
slash watch on YouTube.

Speaker 1 (01:02:50):
And well, I know, I know we're trying to wrap,
but you really have triggered something in me. So I'm
going to keep going for a moment, if
you'll allow me, which is that I love
podcast content. I'm an audio girl. I came from a
background, in a house where public radio, or NPR,
was always on. And my thing is this: as we
are being told that video is the thing, it has

(01:03:13):
echoes of the great pivot to video that was a
big scam. And also I'm finding more and more audio
creators making podcast content that I think is now prioritizing
the video experience. And so a podcast that you love
and you've been listening to it in your earbuds for decades,
now that podcast is prioritizing the video consumer experience. And

(01:03:37):
they'll be referencing things that they then don't describe,
like they'll show things on video and
you're like, I don't even know what they're talking about.
I can promise you this podcast, TANGOTI, will never
do that. We are audio first, audio first, forever. Video
will be supplementary. We're trying to get there for video people,
but audio rules in this community.

Speaker 2 (01:04:00):
Yes, absolutely. The videos that we're making are a
different thing from the podcast. You know, we talked about it.
We were like, oh, should we just record the podcast
and then like throw that up on YouTube. No, that's
not what we're gonna do. That's not what we want
to do. That's not what we're gonna do. So we're
still trying to figure out what are the videos that

(01:04:20):
we put up there. Hopefully they can be, you know,
a good complement to what we're doing with the audio.
But listeners, thank you for sticking with us on audio.
Do know that audio is like under siege. I don't know,
maybe that's too strong.

Speaker 1 (01:04:36):
But like, no, audio is under attack.

Speaker 2 (01:04:38):
Yeah, so keep listening, tell your friends, share the shows
that you like. Hopefully we're in that set, but even
if we're not, you know, protect the audio. Audio forever.

Speaker 1 (01:04:50):
Thanks so much for listening. We will see you on
the Internet. Got a story about an interesting thing in tech,
or just want to say hi? You can reach us
at hello at tangoti dot com. You can also find
transcripts for today's episode at tangoti dot com. There Are
No Girls on the Internet was created by me, Bridget
Todd. It's a production of iHeartRadio and Unbossed Creative. Jonathan

(01:05:13):
Strickland is our executive producer. Tari Harrison is our producer
and sound engineer. Michael Almato is our contributing producer. I'm
your host, Bridget Todd. If you want to help us grow,
rate and review.

Speaker 4 (01:05:24):
Us on Apple Podcasts.

Speaker 1 (01:05:26):
For more podcasts from iHeartRadio, check out the iHeartRadio app,
Apple Podcasts, or wherever you get your podcasts.