Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
Hey, this is Annie and Samantha, and welcome to Stuff
Mom Never Told You, a production of iHeartRadio, and today we
are once again so happy to be joined for the
first time in the new Year by the fabulous, the
fantastic Bridget Todd.
Speaker 2 (00:25):
Welcome Bridget, Thanks for having me. Happy New Year.
Speaker 3 (00:30):
This is my first, I think my first podcast recording
of twenty twenty six. So if I sound a little rusty,
that's what's going on.
Speaker 1 (00:37):
You're in good company. You're in good company. And also,
as we were joking about in that sad way before
we started recording, you picked a doozy of a topic
to come out swinging with, which we'll get into more
in a second. But how have you been, Bridget, How
has the New Year been? Etc?
Speaker 2 (00:58):
Yeah, the New Year has been.
Speaker 3 (00:59):
I don't know if other folks are feeling this, it's
just still been a little bit of a lot. Like
I don't know, I am someone who I think I've
said this on this show before. I like New Year.
I like all the sort of ra rah this year
is gonna be different, fresh start. That stuff really works
on me. Even I didn't do that this year. I
didn't even buy a planner and a bunch of pens
(01:21):
to like sit and rot in my home. I know
you love pens. I love buying pens I won't use,
and buying a planner I'll use for a week and
then forget about.
Speaker 2 (01:30):
I didn't even do that.
Speaker 4 (01:32):
That makes me sad.
Speaker 1 (01:33):
Yeah, it's we were. We had an episode on this recently.
I don't think I said this in the episode though,
but I'm very meticulous about my calendar and planning
and I have not even updated my calendar to January.
I have made a note to do it today.
We shall see if it happens. But yeah, it's just
(01:57):
felt like something was off or strange about the whole thing.
It doesn't feel like a new year to me. It
just blurs all together. I'm not sure, but maybe the
calendar will help me.
Speaker 3 (02:13):
Yeah, that's how I'm feeling too. Where did you
put that note?
Speaker 1 (02:16):
If not on a calendar, I have a little sticky
digital sticky note on my desktop.
Speaker 2 (02:23):
Oh, you'll lose it.
Speaker 1 (02:25):
I mean I don't think so, but I do have
backup plans. What about you, Samantha.
Speaker 2 (02:33):
That is amazing. Yeah, how's your new year? Sam?
Speaker 4 (02:37):
Again, I think that's why I'm being silent because I'm like, I.
Speaker 2 (02:39):
Don't, don't.
Speaker 4 (02:41):
We're here, We've made it to this year. So you know,
as the older generation would say, blessed to be around
for another year. You know, the alternative is dark. But
these reports have gotten me. That's the worst, I think.
It's like everything. You know, we talk about the millennial freeze,
(03:04):
the, uh, the reactions that everyone is having, millennials slash,
uh, for me, Xennials, who have gone through all of
the chaos in our lifetimes and then coming in having
that small sliver of hope, we'll be like, yeah, maybe
this will be a better year, and then the year happens,
you're like, maybe it's going to be the same. Maybe
it's just going to be the same.
Speaker 2 (03:25):
So I think part of it for me, I felt
the exact same thing.
Speaker 3 (03:28):
And I think part of it for me was that,
you know, over the holidays, sometimes you're with your families,
you're not checking the news as much. When I checked
back in, I was like, oh, let's see what's going
on in the world.
Speaker 2 (03:40):
The world was on fire.
Speaker 3 (03:41):
It felt like, oh, so it's sort of hard to
start the new year on a hopeful note, feeling like
we might have a chance for stability and feeling a
little bit better, and then being like, oh, just kidding,
it's gone, right.
Speaker 4 (03:55):
Like I feel like for a little while we had
the main hope of being like, we're the bad guys,
but we're not the absolute main villain. And as we
see things now, we're like, oh, we've become the main villain,
like international main villain. Oh, that's interesting, that's a new one.
Speaker 2 (04:11):
And we are.
Speaker 4 (04:14):
Anyway, that's how I'm doing. And I laugh, and here we
are in this episode and I'm so glad we get
to be educated and distracted by you today.
Speaker 3 (04:24):
Yes, well I hope that's I don't know if it's
going to make anybody feel happier about our situation, but
you know, sometimes you just have to have a clear
eyed understanding of what's happening and just deal with it.
Speaker 4 (04:40):
Just deal with it.
Speaker 1 (04:42):
Yes, And it is really important that we talk about it.
And Samantha and I have been discussing how we've noticed
a real ramp up in very very destructive online behaviors
and so as difficult as a conversation like this is,
(05:02):
we do need to talk about it. So, yes, what
are we discussing today?
Speaker 3 (05:08):
Yeah, so I just have to give a warning because
the whole conversation is going to be pretty grim and
pretty dark. So just a trigger warning upfront. But as I'm sure
we all know by now, Renee Good, a poet,
mother, wife, was shot and killed by ICE in Minnesota,
right, and I watched the video. It was horrifying. The response
from the administration has also been predictably horrifying. The conversation
(05:32):
online, probably predictably, has been just awash in lies
and distortions about both Good and what happened to her.
Speaker 2 (05:42):
All of these things, I feel, are
Speaker 3 (05:43):
Pretty at this point, pretty predictable when this kind
of thing happens. Unfortunately, however, something that I did not
and could not predict, and maybe I should have, is
that when I was scrolling X, I saw a screenshot
from the video of Good being shot and killed. So
it's a video of her, an image of her dead
(06:05):
body covered in blood, and somebody under the image asking
Grok to generate an AI version of that image that
puts her in a bikini. And guess what, Grok complied
and generated an AI image of Good dead, covered in blood,
but wearing an AI generated bikini. And do you know
(06:28):
those things that you see where you're like, that's it.
I have hit my limit. This is that's enough Internet
for me. I genuinely had to log off and go
for a walk outside because something about that I think
just I don't have the words for that kind of
depravity that it showed, like it just really I'm not
(06:52):
bummed out by a lot on the Internet these days,
but this really bummed me out.
Speaker 1 (06:57):
Yeah, legitimately, understandably so. That is incredibly upsetting.
Speaker 3 (07:02):
And I'm sorry to say that this is not even
an isolated thing because when you go to X and
I should say, off the top, I'm not really
spending a lot of time on X. I still have
an account there, but when you go to my account,
it says please see pinned tweet, and then the pinned
tweet is like, I'm not on X, so please don't
send me anything here because I'm not gonna see it.
(07:23):
But if you were to go to X and you
type in put her or make her, you will see
how many creeps are posting under images on the platform
of women and children, in some cases asking Grok, X's
AI, to generate that person in a bikini or
some otherwise sexually suggestive thing or pose. Like sometimes they
(07:45):
will ask, oh, make this person look heavily pregnant, like
things like that. Some of the things I've seen in
the last few weeks are things like Grok being asked
to undress an image of a fourteen-year-old actress
on the platform, which Grok did. I think it's
important to say up top that we're talking
about non-consensual AI-generated sexualized images of women and girls. However,
(08:11):
like that Renee Good photo I talked about earlier, it's
also just, like, dark. It would be bad
enough if we were talking about just sexualized images of
women and girls without their consent. That's bad enough.
Speaker 2 (08:25):
We're talking about like.
Speaker 3 (08:26):
Dark, hateful, sexualized images of women and kids. Right, like,
Grok accepted a prompt to add a swastika bikini to
a photo of a Holocaust survivor. It will, you know,
accept prompts to make women look like they've been beaten up,
things like that, Like, not just regular sexualized images which
are already bad, much darker, weirder, more depraved stuff.
Speaker 4 (08:50):
It's just, the linking of this type of content
and the usage of AI. Once again, when we had our
last episode with you, we talked about all of these
things inevitably being a part of the incel red
pill community. This is it in action as we
are watching it unfold, and people actually not, obviously,
(09:12):
for those of us who know the consequences and what
this is showing evidence of, and any person of good
humanity understands that this is gross, but not really
paying attention to or understanding where this is coming from, and
the normalization of accepting things like this. Like this is alarming,
(09:32):
like nauseatingly alarming.
Speaker 2 (09:36):
You really put it so well.
Speaker 3 (09:38):
I think it's not just that this kind of thing exists,
because okay, it's the Internet. Of course, there's dark corners
where people are doing dark things. X is one of
our largest social media platforms. It's not you know, I'm
not there and a lot of people have left, but
it's not a niche platform. There's still lots and lots
of people who show up there. We're not talking about
(09:59):
an alternative space like Telegram. This is one
of our biggest communications platforms, and this is happening openly
and publicly, and in a lot of cases, the person
who runs that platform, Elon Musk, is laughing about it.
Joking about it, seemingly celebrating it. And so I'm glad
that you brought that up. That it's not just that
this is occurring, it's the normalization and the way that
(10:21):
it is being tolerated in our mainstream online spaces. And
really it's not just what's happening on X, because Grok
is also available as a standalone app in addition to
being something that you could find on X. And as
the outlet Indicator points out, the standalone Grok app has
been used to generate sexual imagery of kids aged eleven
(10:43):
to thirteen.
Speaker 2 (10:44):
This is according to the Internet Watch Foundation. So yeah,
it's just.
Speaker 3 (10:49):
A gross situation all around.
Speaker 1 (11:02):
And for any of our listeners who might be blissfully
unaware, what is Grok, Bridget?
Speaker 2 (11:10):
I'm glad that you asked.
Speaker 3 (11:11):
So for folks who don't know, or maybe you've left
the platform before Grok became a thing, like me, Grok
is X's AI chatbot. But something to know about Grok
is that Grok is, with intention, different than other chatbots
that you might be familiar with, like ChatGPT
or Claude, because Grok has been designed to basically be
(11:34):
like an edgelord chatbot. There's no other real way
to put it. Because we know Elon Musk is a
loser creep, Grok has basically been designed for loser
creeps by loser creeps, right. And so it is absolutely
true that other chatbots like ChatGPT and Claude absolutely
have their issues. Grok, however, is distinct for being uniquely
(11:57):
awful in ways that are intentional, really built in
for when people use it. And I guess I wanted
to start with Grok because folks might have seen headlines
that say some version of Grok has apologized for generating
sexualized images of young girls, and I wanted to start
(12:18):
there because I just want to make it very clear.
I didn't know that this was something that needed clarifying,
but just in case it does, Grok is not sentient.
Grok is not a human. Grok is technology. Grok's not
a person. So Grok doesn't do anything. Humans like Elon
Musk built Grok, they trained Grok, they run it as
(12:39):
a commercial service, and they make it available for other
humans to do things with. And right now, what those
humans are doing with Grok is undressing women and children
without their consent to create sexualized images, also known as,
here in the United States, crimes. Criminal activity.
Speaker 4 (12:56):
And as a reminder, Musk literally kind of
went on a platform about making sure to get
CSAM, or child sexual exploitation material, off and protecting the kids.
Speaker 2 (13:09):
By the way.
Speaker 3 (13:11):
Yeah, so I don't remember if I talked about this
with you all or not, but one of
the big things that he did
a lot of grandstanding about was that we're gonna
wipe child sexual assault material off the platform.
Speaker 2 (13:24):
And then he really didn't do that at all.
Speaker 3 (13:29):
Right, Like, one of the first things that he did
when taking over Twitter was to fire I think eighty
percent of the staff responsible for building tools that look
for and combat and stamp out sexualized material involving children.
Speaker 2 (13:45):
And so yeah, he sure didn't, but he sure loves to.
Speaker 3 (13:48):
I mean, he doesn't really grandstand about it that much anymore,
because how could you? But yeah, his
tenure at Twitter began with a lot of grandstanding
and then doing nothing.
Speaker 4 (13:59):
Right. He literally did this because of the election. And
as we know, the right wing loves to scream about
protecting children, protecting women, protecting babies, but not in actuality.
They just love that sentiment, and it works. For me,
and I've talked about this, we've talked about this on
the show often, how it does hone in on women specifically,
(14:21):
making sure if you don't stand for these things, then
you are a monster that hates children, that wants to
kill all children. And eat children, by the way, that
was also, you know, we love the Pizzagate, which
still happens, we know, but that was the beginning. And
then once Grok was released, if I remember correctly, it
actually worked okay in that it would call out misinformation,
(14:43):
but when correcting the misinformation meant correcting the far right, I mean,
they were like, oh no, no, I got to fix this.
Speaker 3 (14:49):
Yes, that's exactly what happened. So well put. Folks
might remember that Grok was not being biased enough and
was saying things that we know to be true,
and then Elon Musk himself stepped in and said no, no, no, no,
we're working to make Grok less woke. At that
point Grok started calling itself MechaHitler and went
(15:11):
down a.
Speaker 2 (15:11):
Real weird rabbit hole on that one. Uh yeah, but
that is exactly what happened.
Speaker 4 (15:17):
I just remember this. And then, as we know, since then,
all of that grandstanding, outside of making sure that
the right is not censored, because you know, we don't
want that or anything else, it seems like, yeah, has
kind of been the beginning of all of this nastiness.
Speaker 3 (15:37):
Yeah, and I'm glad that you said that, because I
want to back up a little bit, because you know,
we're talking today about Grok, rightly so, but it didn't
really start with Grok. Like X has been a hotbed
of this kind of material for a while. Like folks
might recall that the teenage Marvel star Xochitl Gomez was
(15:57):
targeted with deepfake images that flooded X, and this
happened when she was only a minor, she was seventeen,
and she's spoken out about this publicly that she basically
was just told there's nothing that can be done. You
just have to make peace with the fact that these
images are out there. Back in January of twenty twenty four,
Taylor Swift was a target of the same thing. AI
(16:17):
generated sexualized images of her flooded the platform. Those images
actually originated on an alternative social media platform called Telegram.
At the time, Telegram had this channel that was essentially
kind of a marketplace for celebrity deepfakes, where users
would like request and trade images that they had made
of celebrities.
Speaker 2 (16:37):
Those images then made their way to X where they
really took off.
Speaker 3 (16:41):
Like Telegram is still pretty niche, but once those images
went from Telegram to X, that's when they got a
lot more visibility. When this happened, in my opinion, I
don't think that X handled it very well, which I
think we can really see as a precursor to how
we got to where we are right now with it. Initially,
X's way to deal with it was to try
to block people from being able to search Taylor Swift's name,
(17:04):
which I think is not a real fix anyway. But
when you put her name in quotes, you still could
search her name, and so obviously that did not work
as a fix. But also that solution would only work
for women named Taylor Swift, right Like, It's not any
kind of like meaningful way to combat the actual problem
on the platform. It might just be preventing people from
(17:27):
searching Taylor Swift, but not even really at that. So, yeah,
did not handle it well. And so this was happening
before Grok was a thing. But back then,
these images would start on niche or alternative platforms and
then go to Twitter to get reach and visibility. Then
enter Grok, which essentially allows users to use natural language
(17:50):
to ask Grok to generate whatever they want to see
women and girls do. So you could just say, hey,
Grok, do XYZ, and Grok's going to do it. Now again, Sam,
As you rightly pointed out, keep in mind that when
Elon Musk took over at Twitter, he very early on
dissolved Twitter's Trust and Safety Council and fired eighty percent
of the engineers working on child exploitation issues. This is
(18:12):
from a really talented journalist at Spitfire News, Kat Tenbarge,
who's been writing about this for a really long time.
So you know, Musk comes into Twitter, he fires the
people who are working on this. He knows that they
already have this issue with non-consensual sexualized images on
the platform and.
Speaker 2 (18:29):
Are not really handling it well.
Speaker 3 (18:31):
And yet over the summer, X rolls out what it
calls Spicy Mode for Grok, which is going to be
a way to let people generate images and video to
produce sexual content. And so I want to put it
that way because to me, none of this stuff is
happening in a vacuum.
Speaker 2 (18:48):
Right.
Speaker 3 (18:49):
These are very clear decisions, and I would say it's
very foreseeable what kind of results decisions like these are
going to have.
Speaker 4 (19:01):
You know, with all of the KOSA stuff happening and
the banning or the exclusionary stuff, with things like Pornhub
and all of that, this is interesting. I'm not
trying to be conspiratorial here, but on the far reaching
of the conspiracy side, this seems like a really interesting
(19:23):
monopoly for sexualized content.
Speaker 3 (19:27):
I've been racking my brain because this just seems weird, right,
it's just.
Speaker 2 (19:31):
A very yeah. I mean, and again, I just I
can't wrap my head.
Speaker 3 (19:37):
Around living in a world where miners aren't allowed to
talk to other miners on social media because that's dangerous,
but they can be undressed and sexualized by GROC and
that's fine.
Speaker 2 (19:52):
Like, I just can't square that circle, if you will.
Speaker 4 (19:56):
With all of the child porn laws, which
are pretty explicit, like children
can be charged with it just by taking pictures of
themselves and having it on their phone, that is child porn.
I have seen kids being charged with child porn.
Speaker 1 (20:11):
By the way.
Speaker 4 (20:12):
Wow. So it's interesting, uh huh, it is interesting to see
this where they're like, but it's not real. We didn't
harm a kid. We just took their likeness and then
dot dot dot. Yes.
Speaker 3 (20:26):
So I wasn't totally sure about this, so I looked
it up and according to Enough Abuse, which is an
anti-child sexual abuse organization, forty-five states in the
United States have enacted laws that criminalize AI-generated or
computer-edited child sexual assault material, while five states and
the District of Columbia, where I live, have not as
of August twenty twenty five.
Speaker 2 (20:47):
Right, and so you're exactly right.
Speaker 3 (20:50):
One of the I guess criticisms or pushback sometimes I
will hear is like, oh, well, they're not actually harming
real kids. It is still a real crime.
Speaker 2 (21:00):
So that's all I can say.
Speaker 4 (21:03):
Yeah, I don't want to jump too far ahead, but
I'm like, the alarm bells are ringing over.
Speaker 2 (21:10):
Here, as they should be.
Speaker 1 (21:13):
Yeah, And I think also calling something spicy mode is
so unserious. Like they're treating this so unseriously, of like, oh,
this is just fun. And I think it's a
very convenient excuse of being like, well, it was the AI.
But like you said, there are people behind this that
created it this way. But I think they're using it
(21:35):
very conveniently for them to be like, I don't know,
it's just spicy mode.
Speaker 3 (21:40):
Yeah, yeah, And I want to kind of underline that
because we have to. We can't not mention that X
is run by Elon Musk. And I'm sorry, but this
is just part and parcel of what Elon Musk is about.
Before I got on with you two, I was like, Oh,
what's Elon Musk talking about. He was posting on X
(22:02):
jokey pictures that showed a toaster in a bikini, So
not taking this seriously at all. And I think we
have to say Elon Musk has been a really toxic
decision maker for a long time, and I can't help
but wonder if some of the people in power
and people in media had not treated this
like, oh, just the acceptable quirkiness of a brilliant genius,
(22:26):
and rather as chaotic, volatile leadership decisions that are bad for business,
I wonder if we would not be in this situation
where somebody like Elon Musk feels totally comfortable talking about
this in a way that makes it clear he thinks
it's a joke and it's not serious to him. The
whole thing is like very unserious.
Speaker 4 (22:44):
I mean, this is gonna be, if one day
someone who doesn't like us gets a hold of our episodes,
this is gonna be a bad take. But let's just
for a minute imagine what would have happened if someone
did this with a Charlie Kirk picture, of his death.
Speaker 2 (23:01):
Are you kidding? I mean, like, yeah, it would be, I mean,
don't even get me started when
Speaker 4 (23:08):
the people went off on just quoting his words as
being offensive. Yeah, could you imagine.
Speaker 3 (23:15):
I could not, right? And I don't think we would
ever see that. Like, it just, it's, yeah. And
you know, it's the scale, the scale of the issue
on X. It's really kind of mind-boggling how
much of this content is flourishing there right now. Copyleaks,
(23:37):
which is like a content analysis firm, reported on
December thirty-first that X users were generating roughly one
non-consensual sexualized image per minute.
Speaker 2 (23:48):
That is so many images, that's so.
Speaker 3 (23:51):
Much and not to like even think about the environmental
impact of this, like I had, like I need to
see this miner in a bikini is worth the environmental
impact of doing this one image per minute? That something
about that I just it's hard for me to wrap
my head around.
Speaker 4 (24:08):
That people have the goal to do it. This is
a forum that used to be used for social and
political conversation.
Speaker 3 (24:19):
Yeah, now this. And you know, people like Elon Musk
love to talk about X or Twitter as like, oh,
it's the digital town square where people can enter the
marketplace of ideas. Okay, well, as a woman, I'm not
going to enter a marketplace where my top might get
yanked off when I go in there.
Speaker 2 (24:36):
So I don't believe that X is a marketplace of ideas.
Speaker 3 (24:40):
But even if it were, if it's not safe for
women to show up, women are not going to be
able to have a chance to engage in this so
called marketplace of ideas, right? It's just ridiculous. And so,
according to Bloomberg, during a twenty-four-hour analysis of
images the Grok account posted to X, the chatbot generated
about six thousand seven hundred every hour that were identified as
(25:00):
sexually suggestive or nudifying. According to Genevieve Oh, a social
media and deepfake researcher, the other top five websites
for such content averaged seventy-nine new AI undressing images
per hour in the twenty-four-hour period from January
fifth to January sixth, Oh found. So again, it's not
like X is the only place where this kind of
(25:21):
content shows up. It's on Facebook. It's a problem across
the internet. However, it is very clearly a much bigger
problem and happening on a much bigger scale on X
as compared to other social media platforms. And just to
make something else clear, we've been talking about celebrities and
things like that. To be clear, it is not just celebrities.
(25:42):
It's also just regular, non-famous, non-public figures who
post their images on X that this is happening to.
Speaker 2 (25:49):
And I feel like.
Speaker 3 (25:50):
This is the part of the podcast where I should
say women should be careful, like don't post your picture
on social media, YadA, YadA, YadA, And I guess I
feel some responsibility to say that. But on the other hand,
women should be allowed to post normal pictures on social media.
I kind of hate this advice that says, like, oh, women,
(26:10):
it's not safe for you to show up here, So
I wouldn't show up in these places. Women are not doing
anything wrong, right, We're not the ones who are generating
this kind of creepy imagery. And I just hate that
guidance because it creates a climate where it's just normalized
that women can no longer show up on these platforms
in ways that are safe.
Speaker 2 (26:30):
But of course I have to imagine that's the point.
Speaker 1 (26:32):
Yeah, And it feels very like I used to think
this a lot when I was running, and I would
see advice given towards male runners and female runners, and
it would be very much like blaming you
already for something that could go wrong, like don't put
your hair in a ponytail because a man could grab it,
don't wear headphones because you won't hear something. Like
(26:54):
it was making it feel like it was your fault
for whatever might happen to you. And that's it's so
unfortunate because I agree with you, Bridget. It's one of
those things where I'm like, yes, I sometimes feel like
I have to give this advice, but it feels so
horrible to give that advice because it's not your fault
(27:16):
and it shouldn't be this way.
Speaker 4 (27:18):
Right, Yes, it's that cautionary tale that you have to tell,
which you're like, but it's not your fault. You didn't
do anything wrong. But because this is the state of
the world and everything is awful. Let me tell you this,
and if something does go wrong, you're going to remember
that in your head and say what could I have done?
Which is also another layer that again is not your fault,
is not our fault, but because that's how it is
(27:40):
laid out, it's almost a trap for you to blame yourself.
Speaker 2 (27:46):
Yeah.
Speaker 3 (27:46):
So I was scrolling around social media just trying to
get a sense of like what people are saying, and
there was an image that somebody had taken off of
a woman's LinkedIn, and then somebody asked Grok like, oh,
put it in a bikini, do this, do that? And
then someone was like, well, if she didn't want this
to happen, she shouldn't, like, like, if you put your
picture on the internet, you get what you get. And
(28:08):
I thought, you can't even have your headshot
on LinkedIn, a professional platform, without it being like that.
Like, that's not consenting to have your image be
manipulated in ways that are sexual, And I was just really,
I think it really shows how much we have normalized
that women bring this on themselves via visibility, even when
(28:30):
the visibility is totally normal, right, Like a totally normal
picture of you know, a woman's face. If you are
not allowed to post that, then women can't safely show
up on the internet. It's just very frustrating. And Annie,
your point about being a woman who runs, really,
I think that's a really good analogy, because it is
very frustrating, Like you want to give advice, but why
(28:52):
should you have to give this kind of advice?
Speaker 1 (28:55):
Yeah, and you know, to your point about LinkedIn, Samantha
and I, I think last year around this time, we
were talking about what was going on on Nextdoor,
because I was like, what in the world is this?
Because I was seeing men being like, well, this
woman was asking for me to give her my number
(29:17):
and now she's mad about it. But look at what
she was wearing in this picture. And
Speaker 2 (29:21):
I was like, what.
Speaker 1 (29:26):
This is, like, it's everywhere. It is so pervasive and
so insidious that even, yeah, just going online, I'll be like, oh,
this must be a safe space. No, never. Not that
I ever thought Nextdoor was a safe space. But
I was surprised that that was coming up on there.
Speaker 4 (29:45):
We know Nextdoor is a hotbed of many things
and messes, including racial profiling, but a lot of drama.
Speaker 2 (30:02):
But yeah, I also again.
Speaker 4 (30:03):
Going with my ringing alarms here because I've become real
conspiratory today apparently. But this just feels like a ploy
may be a part of the Esther project, maybe project
twenty five. I could be just blowing this out of
proportion in that they are trying to eliminate women from
these spaces because we know that being able to access
(30:26):
like LinkedIn and Internet give you accessibility to jobs, to pops,
possibly being able to do content, to be able to
grow your business, to be able to grow your name,
whatever whatnot. And if you shame them and victimize them
to a point, or at least like bully them off
of these spaces, then we cannot connect. Marginalized communities
cannot connect as we know it. That's how kind of
(30:48):
Twitter blew up, is being able to connect with each other.
And it was a space for Black women and women
of color to come together and be able to lay
down the grounds for changes that white supremacists just
do not want. So it does feel like maybe they
are doing this purposely. I could absolutely see it. Even though I
don't think he's smart, I think he's smart enough to
(31:09):
use his evil to be like, if we show them
that you're going to be demeaned to nothing but sexual
objects by, you know, like, nothing from you, not because you
posted something, but because we used your post against you,
then we can get rid of you. And therefore you
do not have the same leg up as those who
are able to access this area.
Speaker 3 (31:30):
Oh, I mean this doesn't sound conspiratorial to me at all,
because one only needs to read Project twenty twenty five
to see that pushing women and other marginalized people out
of public and civic life is part of the point. Like,
that's what they're trying to do. And so if women
and other marginalized people are not safe showing up online,
(31:54):
and if they show up online, they either have to
say like, oh, yeah, you might somebody might take your
image and put it in a horrifying AI generated sexualized scenario,
and everybody will be like, well that's what you get.
Come into the internet, even though it's not happening equitably
across all people. Yeah, that does have the effect of
pushing folks out of these spaces. And in twenty twenty six,
(32:14):
you're exactly right. Part of being civically engaged, part of
being economically engaged, is being online. You know that's not
me that's the un saying that, right. And so if
we create the conditions where everybody cannot equitably and equally
and safely show up online, that means that people who
(32:35):
can't show up are not going to be full participants
in their democracy, because that's more and more how
democracy is unfolding, online, in twenty twenty six.
Speaker 2 (32:42):
And so you're exactly right. It doesn't sound conspiratorial to
me at all.
Speaker 1 (32:46):
Yeah, and I know I bring this up a lot,
but that episode Bridget we did together about journalism in India,
I think about that all the time, where women journalists
were getting sent these deepfakes, these nudes of them
being threatened, like if you print the story, we'll send
(33:07):
it to your family, and that's like the news. That's journalism,
and they're trying to scare women away. And I've been
I don't want to do this episode, but I've been
thinking about doing this episode about how Donald Trump treats
women who are journalists and how he speaks to women
who are journalists. And it feels very similar to what
(33:27):
you're talking about, Samantha. It feels very like, let's just
demean these women, Let's make it so miserable for them
that they go away and we don't have to answer
these questions and we don't have to change anything.
Speaker 3 (33:41):
Yes, I think that's exactly it. It's exactly the same thing.
And like, it's been, you know, as women in media,
you kind of have to have a thick skin, but
nobody should be calling you piggy, right, Like, there's a line.
Speaker 2 (33:57):
And I think the fact that we've normalized this.
Speaker 3 (34:01):
I watched Karoline Leavitt, when Trump called that female
reporter a piggy, Leavitt got a question about it,
and she was like, well, you know, we all appreciate that
Trump speaks his mind and says it like it is.
And it's like, the fact that that's normalized now, what
message does that send to little girls like me who
wanted to be journalists when they were little kids, right, Like, oh,
(34:23):
you'll have to withstand this level of attacks about your identity,
your appearance, who you are. It just really again, it
just really makes me sick. But I think it is
intentional because the signal is very clear of like, don't
ask questions, don't make waves, because this is what's going
to happen. And let me be real, no woman journalist
(34:46):
wants to become the story because the President called her
a pig. No, no, and so you have to decide, like,
is do you want this to be part of your
career if you challenge Trump? It's just a really impossible
bargain and that women should not be having to make
in twenty twenty six.
Speaker 1 (35:04):
Yeah. Agree. I mean it's one of those things where
I feel like we're in such a backslide. Because
at one point I would have said, in any other job,
a boss calling you that, you would get fired. But
now it's like, no, no, that was just a joke.
Let's just move on.
Speaker 2 (35:24):
That's okay, it was barely a blip. I'll tell you
something else. There's just no reality where I would let
a man that looks like Donald Trump call me, say
anything about my appearance. Shout out to the
Speaker 3 (35:36):
Women that just took it on the chin, because I
would never let a man that looks like Donald Trump
say that to me about the way I
Speaker 4 (35:40):
Look. The pettiness that would have come out of my mouth.
Speaker 5 (35:46):
All right, oh, oh, okay. Well, going back to Grok,
you have some numbers here that are pretty disturbing.
Speaker 3 (36:02):
Yes. So The Guardian spoke to a PhD researcher at Dublin's
Trinity College AI Accountability Lab, Nana Nwachukwu, whose research investigated
the different types of requests that users were submitting to Grok,
and she found that nearly three quarters of all requests
were direct non-consensual requests for Grok to remove or
replace clothing.
Speaker 2 (36:22):
Three quarters. That is a lot.
Speaker 3 (36:25):
She showed The Guardian some of the different Grok-created
photos that she was collecting as part of her research,
and The Guardian confirmed that dozens of them were pictures
of women, including celebrities, models, stock photos as well as
just regular, ordinary, non-public-figure women posing in snapshots.
And so her research really paints a portrait that this
is like an ecosystem where users are not just making
(36:48):
these images, they are also interacting with each other and
like iterating on the different non consensual images that they've
made GROC.
Speaker 2 (36:56):
To make, right?
Speaker 3 (36:57):
And so something about that is new to me, that
it's not just, Grok, make this horrifying image.
Speaker 2 (37:05):
It's, Grok, make this horrifying image. Oh, cool image, bro,
I'm gonna make it like this.
Speaker 3 (37:10):
Like it's like they're really building community around this, and
they're bold enough to do it
Speaker 2 (37:15):
In public.
Speaker 3 (37:16):
I mean, you know, this kind of thing has been
going on on alternative channels like Telegram, where people
are you know, kind of engaging in community and conversation
about it. But the fact that this is happening in
public is so different to me. And also, of the
several posts that the Guardian saw, lots of them that
got tens of thousands of impressions are coming from premium users,
(37:38):
so blue check accounts, including accounts with tens of thousands
of followers, and so just a reminder, premium accounts that
have more than five hundred followers and five million impressions
over three months are eligible for X's revenue sharing.
So yeah, just it's not a marketplace of ideas. It's
a marketplace of non-consensual images where there is a
(38:00):
financial incentive for people to post this kind of content because.
Speaker 2 (38:03):
If they go viral, they could get paid for it
from X.
Speaker 1 (38:07):
We're all making faces, listeners. You can't see it,
but oh my goodness. And you know, I know we're
gonna get into this later. It feels, it feels
wrong and illegal to me.
Speaker 2 (38:21):
Yes, to me too.
Speaker 3 (38:24):
I mean the animating question that I sort of started
this conversation out with when I sat down to plan
out the episode is how is this allowed?
Speaker 2 (38:32):
How is this legal? Is anybody gonna do anything? Like,
what is going on?
Speaker 3 (38:35):
And again, you know, we're talking about AI-generated images,
but really this is nothing new, because X has also
just been a platform where AI and non-AI-generated
illegal child sexual assault material doesn't just exist but can flourish.
(38:55):
There's a great piece that I read by one of
my favorite journalists, Samantha Cole from 404 Media, called
Grok's AI sexual abuse didn't come out of nowhere. Cole writes,
this is the culmination of years and years of rampant
abuse on the platform. Reporting from the National Center for
Missing and Exploited Children, the official organization social media platforms
report to when they find instances of child sexual abuse material,
(39:17):
which then reports to the relevant authorities, shows that Twitter
and eventually X has been one of the leading hosts
of this kind of material every year for the last
seven years. In twenty nineteen, the platform reported forty-five thousand,
seven hundred and twenty-six instances of abuse. In twenty twenty,
it was sixty five thousand, sixty two. In twenty twenty four,
it was six hundred and eighty six thousand, one hundred
(39:39):
and seventy six. So yeah, I mean again, this is
a problem all across the Internet, but it does sort
of paint a picture of the fact that this is
much worse on Twitter, and Cole points out that these
numbers should be considered with the caveat that platforms voluntarily
report this kind of content, and more reports can also
mean stronger moderation systems that catch this kind of content
(40:02):
when it appears, but the scale of the problem is
still apparent. So Jack Dorsey, who was the CEO of
the earlier iteration of Twitter, he was not very good.
I don't, I'm not gonna like sit here and say
that he was great at moderation. As Cole points out,
Jack Dorsey's Twitter was a moderation clown show much of
the time. But moderation on Elon Musk's X, especially against
(40:24):
abusive imagery, is a total failure. So yeah, not going
well or going very well, depending on you.
Speaker 2 (40:33):
Know what perspective you take.
Speaker 3 (40:34):
You know, if you're a child pornographer, you might be like, oh,
it's actually going great.
Speaker 1 (40:40):
Oh yes, which does bring us back to the question of, yeah,
why is this allowed? Why is this legal?
Speaker 2 (40:53):
Question.
Speaker 3 (40:54):
So this kind of content definitely violates X's own policies,
which prohibit sharing illegal content like child sexual abuse material,
but as a piece for Wired points out, it could
also violate Google's Play Store and the Apple
App Store's guidelines. Wired writes: Apple and Google both explicitly
ban apps containing CSAM, which is illegal to host and
(41:16):
distribute in many countries. The tech giants also forbid apps
that contain pornographic material or facilitate harassment. The Apple App
Store says it doesn't allow overtly sexual or pornographic material,
as well as defamatory, discriminatory, or mean-spirited content, especially
if the app is likely to humiliate, intimidate, or harm
a targeted individual or group. The Google Play Store bans
(41:38):
apps that contain or promote content associated with sexually predatory
behavior or distribute non-consensual sexual content, as well as
programs that contain or facilitate threats, harassment, or bullying. And
there is some precedent for this, because both Apple and
Google have removed other kinds of nudify apps from their
platforms because they're not allowed. However, the standalone Grok app
(42:02):
is still available on both Apple and Google, so I think,
you know, private companies, you just cannot always count on
them to take action against other private companies.
Speaker 2 (42:14):
I think that just really shows.
Speaker 3 (42:15):
Like the weakness in having the you know, barrier of
accountability be private companies making decisions about other private companies,
which brings me to the law. So I am no
lawyer just to give that warning upfront, but the framing
that I am taking is that to me, this seems
(42:36):
like criminal behavior. I understand this as a criminal enterprise
that Elon Musk is personally financially profiting from. And so
I am not a lawyer or a legal expert, so
like I would love to have somebody explain to me
how it is not criminal, But I understand this to
be criminal activity. As I said, forty-five states have
(42:58):
laws that criminalize AI-generated or computer-edited child sexual
abuse material. If you or I used AI to create
such material, we would probably be having legal trouble. If
we were financially benefiting from the sale and trade of that.
Speaker 2 (43:14):
Material, we would have legal trouble.
Speaker 3 (43:16):
I do not understand why Elon Musk does not have
legal trouble in the United States over this, but he
does not.
Speaker 1 (43:22):
Yeah, and I know, every time we do these episodes.
Forever ago, we did one about Xbox, which, wow, oh my
Speaker 2 (43:29):
God, I forgot about that.
Speaker 1 (43:31):
But you know, you have those, like, the rules,
the regulations of a platform, and they'll say something like
you can't do this or you'll get in trouble, but
they're doing it and they're not. And so I remember
there was a big push on Twitter, before it became
X, to report Donald Trump or anybody who was inciting violence,
(43:53):
which was against their guidelines, and be like, this is it,
and you know, you're never going to hear anything about
that, ever see anything happen about that. Recently, I tried
to report something on YouTube and it was almost impossible
to do. It's like, but this is against the guidelines
you said that you had, and now you're ignoring it,
(44:14):
giving me no avenue to really complain about
it, or anyone to complain to, like, hey, this is
violence or this is illegal or sexual content that should
not be there, exactly.
Speaker 3 (44:30):
And you know, speaking of Trump, people might remember that
I think around last year he signed into law
the Take It Down Act, which makes it illegal to
knowingly host or share non-consensual sexual images, and so
people might be thinking, well, shouldn't this
Speaker 2 (44:46):
Law prevent this?
Speaker 3 (44:48):
One main thing to know about that law is that
companies do not have to respond or do anything until
a victim reports it. And so if nobody is reporting
this, then X doesn't have to do anything.
So I just wanted to note that, because
you would reasonably think, like, we have a law against
this now. Not really.
Speaker 1 (45:08):
Well, well, what has the response from X been about all
of this?
Speaker 3 (45:17):
Well, as I mentioned, people were really circulating a, I
believe, user-generated response from Grok, quote, I deeply regret
an incident on December twenty-eighth, twenty twenty five, where
I generated and shared an AI image of two young
girls, estimated ages twelve to sixteen, in sexualized attire, based
(45:38):
on a user's prompt. It was a failure in safeguards
and I'm sorry for any harm caused, end quote. And
I wanted to say, as I mentioned earlier, about
all of these headlines about how Grok is apologizing and
taking responsibility: again, Grok is not sentient, and setting it
up like a non-sentient piece of technology could take
(45:59):
the blame for something that humans did, and humans,
you know, facilitated, lets the humans off the hook,
because humans like Elon Musk are really not doing anything.
Per Indicator, Musk, while this was all going on, shared
at least thirty different posts celebrating Grok and talking about
(46:20):
how great Grok is while this was happening, between January
seventh and eighth. He has not expressed remorse for what's
happening on X, and in fact, has been basically joking
about it. I will say that at one point he
might have been like, oh, I should probably say
something that's, you know, not a joke. He did
say that anybody who used Grok to create anything illegal
will face consequences. They haven't, but he's doing that while
also laughing about the fact that Grok is being used
also laughing about the fact that Groc is being used
in this way. As Kat Tenbarge put it, the
reality is that X has not taken this as seriously
as one of Grok's user-prompted posts might seem to suggest. Instead,
Musk has encouraged, laughed at, and praised Grok for its
(47:03):
ability to edit images of fully clothed people into bikinis.
Grok is awesome, he tweeted, while the AI was being
used to undress women and children, make it look like
they're crying, generate fake bruises and burn marks on their bodies,
and write things like property of Little Saint James Island,
which is a reference to Jeffrey Epstein's private island, and
sex trafficking. So yeah, he is not taking it very seriously.
(47:29):
And in one of her pieces, Kat said that she
reached out to X for like an official comment on
the record, and she got back an automated email that
just says legacy media lies, which, by.
Speaker 2 (47:40):
The way, Kat's not legacy media.
Speaker 3 (47:41):
She runs an independent news outlet. So that's not even,
like, it's a complete non sequitur. It's like
not even a relevant thing to say. As she puts it,
there's no reason to make X and Musk seem more
concerned about this than they really are. They've known about
this happening the entire time, and they've made it easier
to inflict on victims. They are not investing in solutions,
they are investing in making the problem worse.
Speaker 4 (48:03):
Yep, a lot of
Speaker 1 (48:04):
Heavy sighs this episode. I keep going back to the
point you made at the beginning, Bridget, that it's hateful.
It's very very hateful. It's not just like, oh, let's
(48:30):
put women in bikinis or children in bikinis, which is
horrible enough, but it's like actively these attacks that I
mean are damaging if you're just going online and this
is what you face, that can impact your entire day
that you weren't expecting to have to deal with this.
And I do think for sure that they are investing
(48:54):
in making the problem worse, which I guess does bring
us to the question of, well, where do we go
from here.
Speaker 3 (49:03):
Yeah, so, I'm sorry to say I don't think the
United States is going to do anything at all about this.
I think the way that I've been hearing elected officials
talk about it, I don't think anything's going to happen
to Musk in the United States. Kat Tenbarge spoke
to Dr. Mary Anne Franks, who drafted the template for
several laws against non-consensual distribution of intimate imagery. She
(49:24):
told Kat the FTC has made it clear that they're
fighting for Trump. It's actually never going to be used
against the very players who are the worst in the system.
X is going to continue to be one of the
worst offenders and probably one of the pioneers of horrible
ways to hurt women. And unfortunately, I do think that's
the case in the United States. However, Europe is not happy.
(49:46):
Earlier this week, a spokesperson for the European Commission criticized
the sexually explicit, non-consensual images generated by Grok, calling
them illegal, which they are, appalling, and saying they
have no place in Europe. Then a few days later,
the EU ordered X to retain all internal documents and
data tied to Grok through twenty twenty six, which extended
(50:07):
their earlier directive to preserve evidence relevant to the Digital
Services Act, even though there was no new formal probe announced. Similarly,
regulators in the UK, India, and Malaysia have also signaled
investigations into X now. In response to this, X announced
that they were going to restrict the ability to create
(50:28):
images using Grok to users who pay for premium, who
pay for a blue check. This is not at all
a fix to the problem. All it means is that
the ability to make non-consensual sexualized content will become
a premium service. So not only is it not addressing
the problem, it's just becoming a way for X to
make money off of it. Again, I would love for
somebody to explain to me how this is not a
(50:51):
financially beneficial criminal enterprise for Elon Musk.
Speaker 4 (50:55):
Well, that also goes to show, what if we as
a country feel like another country is harming us? So,
you know, child porn is pretty bad and seen as
like a whole international thing, right, trying to prevent trafficking
and such. Could these other countries come in and try
to arrest Musk, like the US does?
Speaker 3 (51:17):
I mean, great question, right, since the United States
can just rush in and kidnap a corrupt elected official or.
Speaker 2 (51:27):
Leader, I don't know, I would be curious to know
where that begins and ends. I mean, I'm just saying, if you'll
take requests.
Speaker 4 (51:37):
CC is pretty high up on that, you know, no-no
list.
Speaker 3 (51:40):
So yeah, so X trying to make it so that
you have to just pay for premium in order to
generate images with Grok. The British government is really not
impressed by this, a spokesperson told The Guardian. Quote the
move simply turns an AI feature that allows the creation
of unlawful images into a premium service. So that's pretty
(52:04):
much what's going on, right? Like, I think we'll see
if other countries crack down on this. I mean, even
doing the research for this episode, it's like I had
to go to the Guardian, I had to go to
UK or overseas papers to get real reporting, because it's
(52:24):
just, I don't think that people are really taking it
as seriously here in the States. Regulatory bodies definitely are
not, right, obviously.
Speaker 4 (52:32):
I mean, he has, by creating this premium X standard,
he's kind of become the Epstein of child porn.
Speaker 2 (52:40):
Yeah.
Speaker 4 (52:40):
I mean, if you can pay for it, then.
Speaker 3 (52:41):
Go for it. I mean, I have to imagine Epstein
was financially benefiting from the trade and exchange of
minors for sex. Again, I would love to have somebody
who knows more about the law explain to me how
Elon Musk becoming personally financially enriched through the generation of
illegal material is not similar. It seems very similar
(53:02):
to me. And you know, to sort of wrap up,
I think it's important to note this. You know,
we've been talking about this as a tech issue, which
it definitely is, but it's really a culture issue. And
I just I kind of keep coming back to that
again and again and again, because, you know, when Google
released their, like, AI image generator Nano Banana
(53:23):
Pro, so many of the images people made were of, like, conventionally attractive women. When you go
to the Nano Banana subreddit, it's basically like post
after post after post of, like, hot woman. And
it just makes me realize how much technology is being
used to reinforce this, this worldview where women are just
meant to be consumed and controlled and exploited. And to me,
(53:47):
it really does express this desire to live in a
world where women just exist to be consumed and controlled
and stripped of whatever agency we have managed to
claim over our own bodies, our own lives.
And I think until we confront that reality, this
is not a problem that you can build technical safeguards
to fix. I do think that these platforms should be
(54:08):
building technical safeguards, and we should be advocating for that,
But I don't think it's a problem where you can
like guardrail your way out of it. People who have
been reporting on deepfakes since before we had a
word for it, folks like Samantha Cole, have really pointed
this out. Cole writes, in twenty eighteen, less than a
year after reporting the first story on deepfakes, I
wrote about how it's a serious mistake to ignore the
(54:29):
fact that non-consensual imagery, synthetic or not, is a
societal sickness and not something companies can guardrail against into infinity.
Users feed off one another to create a sense that
they're the kings of the universe, that they answer to
no one. This logic is how you get incels
and pickup artists, and it's how you get deepfakes,
a group of men who see no harm in treating
women as mere images and view making and spreading algorithmically
(54:52):
weaponized revenge porn as a hobby as innocent and timeless
as trading baseball cards. I wrote at the time that that
is what's at the root of deepfakes, and the consequences
of forgetting that are more dire than we can predict.
Speaker 2 (55:05):
And yeah, I just really really agree with that.
Speaker 3 (55:07):
I think that it's a tech problem and we should
be talking about it as a tech problem, but deeper
than that, it's like a societal and cultural rot at
the heart of our society that we really do have
to deeply address, because deepfakes are only but
one aspect of how this is showing up in increasingly
dangerous ways.
Speaker 1 (55:27):
Absolutely, and I think you've put it so well.
There's just so many different threads to this.
I can think of so many different episodes we've done
where this rings true. Like I can't even list all
of them, but of course a lot of video game
episodes come to mind for me.
Speaker 4 (55:47):
Male loneliness epidemic.
Speaker 1 (55:48):
Yeah, but it's true. It's like everywhere. And I
think one thing, I know we've used the word alarming
a lot in this episode, but one thing that really
struck me in this was that huge rise in the
requests for, like, non-consensual images. It just, it
(56:11):
does feel like it's getting worse. It does feel like
it's getting worse. And everybody who's listening, and everybody,
all of us, we know that this has been a problem.
We've been saying it for a long time, and now
it's worse.
Speaker 3 (56:31):
I hate to leave it there. I hate to not
have there be some hope. I mean, I think the thing that
makes me feel a bit hopeful is that I think
that most people agree that this is despicable. I think
that the people who are creating this and benefiting from
it are a minority of people, and I think that
(56:53):
there are forces that would like to normalize this behavior,
but it's not really taking, right? Like, it's normal among
some, but I don't think so. I think that the way that
people are responding to it, it's clear to me that
there are more of us that think this kind of
thing is unacceptable than there are people who are getting
their rocks off to it. So maybe that's a little
bit of hope. I do agree it's getting worse, but
you know, there are more of us than there are
(57:15):
of them.
Speaker 1 (57:18):
Oh my goodness. Well, you know, I'm wondering, did you
say The Social Network has a sequel?
Speaker 2 (57:27):
Yes? Oh my god, Yes it does have a sequel.
Speaker 1 (57:33):
I'm just curious what that would be now. So will
we ever get an Elon Musk version?
Speaker 2 (57:39):
Oh my god.
Speaker 3 (57:40):
I want Aaron Sorkin to keep making sequels. I want
it to be like a trilogy. So we've got The
Social Network Two, officially titled The Social Reckoning, a
confirmed sequel companion film written and directed by Aaron Sorkin,
set for release on October ninth, twenty
Speaker 2 (57:56):
Twenty six. So coming up, not too long.
Speaker 3 (57:59):
It's about Facebook's harmful social impacts, based on the
Facebook Files, starring Jeremy Strong as Zuckerberg, Mikey Madison
as whistleblower Frances Haugen, and Jeremy Allen White.
Speaker 2 (58:12):
You know, you go ahead and pencil it in for
me coming back to talk about it. Go ahead
and write that in when you get your calendar together.
Speaker 1 (58:19):
But I was thinking about it because we still
haven't done our watch, because Samantha's never seen it.
Speaker 2 (58:26):
I've never seen it, The Social Network. So watch it
this weekend.
Speaker 1 (58:33):
I'm wondering how it will hit differently now, That's what
I'm curious about, because I haven't watched it in a
long time. Knowing what we know now, I'm wondering how
it will hit now. But yeah, I think we should.
I think we should pencil it in. I think, okay,
we'll see you in October. Well, thank you, thank
(58:54):
thank you so much, Bridget. This was definitely a tough topic.
So we appreciate you doing the work and doing the
research and helping us go through it because it is
so important. And happy, happy New Year, and we're looking
forward to continuing to work together.
Speaker 2 (59:14):
Happy new year to you, Happy new year listeners. Thanks
for having me.
Speaker 1 (59:18):
Yes, and where can the good listeners find you, Bridget?
Speaker 3 (59:22):
You can listen to my podcast, There Are No Girls
on the Internet. You can find me on Instagram at
bridgetmarieindc, and you can check me out
on YouTube, There Are No Girls on the Internet.
Speaker 1 (59:32):
Yes, and definitely go do that if you have not
already, listeners. If you would like to contact us, you can.
You can email us at Hello at Stuff We Never
Told You. You can find us on Bluesky at Mom
Stuff Podcast, or on Instagram and TikTok at Stuff We
Never Told You. We're also on YouTube. We have some
merchandise at Cotton Bureau, and we have a book you
can get wherever you get your books. Thanks as always
to our super producer Christina, our executive producer Maya, and our contributor Joey.
Speaker 2 (59:54):
Thank you, and thanks to you for listening.
Speaker 1 (59:56):
Stuff Mom Never Told You is a production of iHeart
Radio. For more podcasts from iHeartRadio, check out the
iHeartRadio app, Apple Podcasts, or wherever you listen to
your favorite shows.