
June 2, 2023 57 mins

This week, Elon Musk interacted with a fake Twitter account impersonating AOC, which had been banned until he decided to unban it, presumably so he could flirt with it. In less cringy news, the U.S. Surgeon General issued a landmark report calling for urgent action to protect young people from the harms of social media. TikTok is awash in creepy AI-generated true crime content narrated by fictional murdered children, the National Eating Disorders Association learns that replacing humans with AI chatbots can lead to dangerous outcomes, NYC takes a tentative step to require transparency in AI-assisted hiring decisions, and Apple announces a suite of new accessibility features.

Amanda Knox talks true crime on There Are No Girls on the Internet: https://podcasts.apple.com/us/podcast/amanda-knox-asks-who-gets-to-own-their-story/id1520715907?i=1000552625297

Internet Hate Machine episode of the importance and history of verification on Twitter: https://podcasts.apple.com/us/podcast/what-elon-musk-could-learn-from-the-endfathersday-hoax/id1648497305?i=1000585587576

NPR piece about the National Eating Disorders Association’s union-busting chatbot: https://www.npr.org/sections/health-shots/2023/05/31/1179244569/national-eating-disorders-association-phases-out-human-helpline-pivots-to-chatbo

Get more bonus content ad-free and join the TANGOTI Discord chat at Patreon: https://www.patreon.com/tangoti 

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
There Are No Girls on the Internet, as a production
of iHeartRadio and Unbossed Creative. I'm Bridget Todd, and this is
There Are No Girls on the Internet. I'm here with
my producer, Mike. Thanks for being.

Speaker 2 (00:19):
Here, Mike, thanks for having me back.

Speaker 1 (00:21):
Bridget. And here's what you may have missed this week
on the Internet. So in stuff-I-absolutely-fucking-hate news,
TikTok is awash with AI-generated true crime victim stories.
So if you've been on TikTok at all, you've probably
seen this new subsection of true crime content where very
creepy AI-generated children speak in this, like, weird

(00:44):
robotic baby voice about how they were tortured or murdered.
This content is flooding the platform. They purport to show actual murder
victims telling their stories. Now, there are adult victims being depicted,
including politicians and celebrities, but the ones that feature children
are by far more popular and plentiful. These TikTok accounts

(01:05):
claim to be honoring, in heavy scare quotes, the victims
and their stories of violence, but Rolling Stone reports that
they often get pretty major details wrong, which may even
be an intentional attempt to get around TikTok's rules against
AI-generated deepfakes of famous people. Exoneree Amanda Knox,
who we spoke to on There Are No Girls on the
Internet last year, and who also has a new podcast about

(01:27):
the history of true crime called Blood Money, had a
really interesting perspective that she shared on Bluesky that
even though this addition of AI is definitely creepy, this
is nothing new. Amanda Knox writes, this isn't a fresh hell.
There's a precedent for this dating back over four hundred years.
One of the earliest forms of true crime was the
printed broadside poster. These posters would tell stories of brutal murders,

(01:50):
often murders of children, and these stories would often be
told in the ballad form, a song written in the
voice of either the condemned criminal or the murdered person.
Whoever wrote all these ballads would just make them all up.
They would invent the pleading words of the child in
the moments before they were killed, or they would convey
the thoughts of the criminal as they confessed or acknowledged regret.
So these creepy AI generated true crime deep fakes of

(02:13):
children may not be a totally new phenomenon, but the
addition of AI to recreate these child victims definitely makes
them much more creepy. Like, to be able to tell
an AI program to generate the likeness of a murdered
child for views on social media is pretty weird, and
I think it represents AI presenting the possibility for
ever more expanded tech-facilitated ghoulishness.

Speaker 2 (02:37):
Yeah, gross, creepy, you know. Amanda Knox, as usual, makes a
pretty good point that it's not new that people have
been using true crime and broadsides for you know, hundreds
of years, ever since the invention of the printing press.
But I guess one thing that is new here with
AI is that historically, I think those like ghoulish tales

(03:01):
were often told for some sort of political purpose, either
to you know, vilify some savage enemy or heartless empire
who committed atrocities on local people, to get the local
people like riled up for some sort of political end.
But here, maybe there isn't that sort of end.

Speaker 1 (03:23):
Oh. I think that true crime is always political. I
think that you don't tell the story of something ghoulish
happening to someone, especially a child or a woman, without
there being some kind of whether it's implicit or explicit,
political agenda. We've seen time and time again where true
crime and the stories of bad things happening to people

(03:47):
and people's misfortune is used to strengthen and bolster specific legislation,
police crackdowns, more policing, increased criminalization of people of color,
and increased surveillance of people of color and the poor. I
think that true crime stories are often political, even if
they don't seem political on their face. I think the

(04:10):
fact that Rolling Stone reports that these videos get millions
and millions and millions of views and are particularly more
popular when they feature the stories of murdered children, that
tells me right there that whether or not these videos
are explicitly trying to make some sort of political or
social argument, they absolutely do fuel political and social you know,
dynamics and are in our culture.

Speaker 2 (04:31):
Yeah, it makes me curious who the perpetrators are in
these stories, you know. I think that would be a
pretty interesting thing for somebody to look at, to get
at the question of what is the political agenda behind
this new phenomenon on TikTok.

Speaker 1 (04:47):
Well, I mean, I don't even necessarily think that it's
like, when I say political, I don't necessarily feel that
people are posting these because they're like, oh, this is
going to get everybody riled up, and they're going to
support X Y Z legislation and policing. I'm not saying
that. Sometimes they absolutely can be. We see that a
lot when it comes to true crime content about trafficking,
that has then been used to support specific legislation. I'm

(05:10):
not saying that that's what's going on here, but I
am saying that when people are consuming more and more
and more content about grisly murders and torture of children,
it is just that's not happening in a vacuum. It's
happening in a culture where we already have conversations about policing,
about you know, stranger danger, about you know, public safety.

(05:31):
And so when I say that they're political, I mean
that they feed into a current cultural conversation around those
issues. You know, I think it's
easy for people who are true crime fans to believe
that they're consuming this content in a vacuum, standalone,
like, oh, it's just entertainment, I'm just enjoying it. But
in fact, it is not happening in a vacuum. It

(05:53):
is happening against the backdrop of already existing conversations about
things like policing and public safety, which are inherently political.

Speaker 2 (06:00):
Yeah, that makes a lot of sense, you know. Definitely,
one doesn't have to look very far to see ways
in which creating a feeling of fear and unease and
distrust among the population fits pretty squarely with a lot
of political actors' social media efforts. Yeah, social media, man,

(06:22):
a powerful thing that plays to our emotions and particularly kids. Huh.

Speaker 1 (06:26):
It sounds like you're setting me up for our next story, Mike.
So I'm gonna go ahead and take that.

Speaker 2 (06:33):
You know. I'm just trying to do my part over here.

Speaker 1 (06:35):
Last week, US Surgeon General Dr. Vivek Murthy issued an advisory
warning about the impact that social media is having on
the mental health of young people. It's a pretty long report,
but the crux of it is summed up in this
quote from Murthy: We are in the middle of a
national youth mental health crisis, and I am concerned that
social media is an important driver of that crisis, one
that we must urgently address.

Speaker 2 (06:55):
So I just want to pause here to emphasize what
a big deal this is. The Surgeon General's Office doesn't
issue these kinds of reports all the time, and when
they do, they often have a big impact that can
last for decades. Right, we probably all remember or know
about the Surgeon General's Report on Smoking and Health, which
really turned the tide on smoking in the US and

(07:18):
set us up decades later after a lot of fights,
to be in a place where we are now, where
fewer Americans smoke now than they have in the past
hundred years. So it really sets the agenda for a
whole generation of medical and public health initiatives. And it
didn't come out of a vacuum either. When it was
announced from the Surgeon General's office, there were supportive comments

(07:40):
from leaders of some of the country's largest medical and
public health organizations, including the American Academy of Family Physicians,
the American Academy of Pediatrics, the American Medical Association, the
American Psychiatric Association, American Psychological Association, the American Public Health Association,
and the National Parent Teacher Association. Right, like, those are
all enormous institutions that have a huge footprint, a huge impact.

(08:04):
And so I just mention that here to really
underscore the point that the Surgeon General has support here,
that there's widespread consensus and acknowledgment in the public health field that
social media is causing a lot of harm to our
young people and we need to do something about it asap.

Speaker 1 (08:21):
So the report does say that social media can have
benefits for young people and that more research is needed to
really understand the impact. So it is a little bit nuanced,
like it's not all bad, but the main takeaway is that
we need to urgently take action to create safe and
healthy digital environments that minimize harm and safeguard children's and
adolescents' mental health and well-being during critical stages of development.
That's kind of one of the main takeaways of the

(08:43):
report is that basically young brains are still cooking and
they don't handle social media very well, which like kind
of sounds like an obvious point. I always say, like,
I'm so glad that I didn't have the social media
climate that we have today, with so many platforms, when
I was, you know, thirteen. God truly
only knows what I would have done and what I

(09:04):
would be like if I had access to that.
But the report says that a highly sensitive period of
brain development happens between the ages of ten and nineteen,
which coincides with when ninety five percent of thirteen to seventeen
year olds and nearly forty percent of eight to twelve
year olds are using social media. The advisory notes that
frequent use of social media platforms can impact brain development,
affecting areas associated with emotional learning, impulse control, and social behavior.

(09:29):
Murthy has previously said that he believes that even thirteen
years old is too early for children to be using
social media.

Speaker 2 (09:35):
So there's two things here worth pulling out of that.
Right Like, social media is particularly harmful for children because
they are still developing their brains, Like you said, they're
still cooking, they're not fully formed, and so it's particularly
dangerous to them during this critical period. And also, we
have a long tradition in this country and society of

(09:57):
protecting young people, right Like, there's general agreement that we
should do more to protect young people than adults because adults,
they're adults, they're grown, they can make choices for themselves.
Children can't, by definition. And so I just wanted to
identify those two different reasons driving this call for more action.

Speaker 1 (10:20):
So the report does say that social media can be
particularly beneficial to marginalized youth. It says studies have shown
that social media may support the mental health and well-being
of lesbian, gay, bisexual, asexual, transgender, queer, intersex, and
other youths by enabling peer connection, identity development and management,
and social support. This is also really important because I
have seen a lot of reporting that suggests that so

(10:41):
much of the research about the impacts of social media
on kids tends to be done on white kids, right? Like,
when the researchers are doing that research, they're not necessarily
also doing research on all kids. And that is particularly
important because, according to a study from the Youth Media
and Well-Being Research Lab, Black and Latino fifth through
ninth graders adopt social media at a younger age than
their white peers. And because youth of color have such

(11:04):
specific online experiences, right Like, these experiences can include things
like hate speech and racialized harassment, and like racialized language
being used online. And so if you're not researching and
looking into the experiences of marginalized kids who are having
such specific online experiences, I don't necessarily feel like you're
getting a clear picture of the impact.

Speaker 2 (11:26):
Yeah, totally agree. You know, we in social science we
talk about intersectionality and the importance of focusing on not
just different demographic groups, but you know, intersections of them.
And absolutely there's a lot of reason to think that
kids from marginalized backgrounds, you know, who identify

(11:46):
with specific marginalized identities, would have a whole host of
specific harms and specific benefits that they get from access
to social media, and we really do need more research
to understand that. But also at the same time, we
urgently need restrictions to protect them, right like in this

(12:11):
in the US regulatory framework, we are reluctant to ever
try to regulate anything until it is solidly, conclusively demonstrated
beyond the shadow of a doubt, through many lawsuits, that, like, yes,
it actually is harmful, and only then are we actually
able to pass some sort of regulation about it. In

(12:32):
other countries, and in Europe, it's not like that; they're
much more willing to acknowledge that something is harmful
and take action. And I really hope that we can
do that over here. Well.

Speaker 1 (12:44):
The report calls for just that. It calls for more
research into the impacts of social media usage and for
social media companies themselves to be more transparent when it
comes to sharing data with outside experts. There are also recommendations
for lawmakers to develop stronger health and safety standards for
social media products and introduce stricter data privacy controls, which,
plus plus plus plus plus, I'm a huge proponent of,

(13:05):
especially for young people, but also for adults. Technology companies themselves, meanwhile,
are urged to assess the risks their products might pose
and attempt to minimize them. The report also mentioned something
that I find pretty important, and that is the role
that parents can and should be playing. The report says
the onus of mitigating the potential harms on social media
should not be placed solely on the shoulders of parents

(13:25):
and caregivers, or on children themselves. So this is where
I kind of have a lot to say. The people
who run social media platforms are making billions of dollars
from harming our kids. Parents and caregivers have been burdened
so much, especially in the last few years, having to
manage virtual learning, shifting workplace policies around remote work. You

(13:46):
know you can work from home now you get to
come back to the office. Attacks on schools, just to
name a few. Tech billionaires know that their technology harms kids.
Facebook is aware. We learned that from Frances Haugen's Facebook Papers.
Facebook cannot say otherwise; from their own internal reports, Facebook is
aware that their product harms kids. End of sentence, full stop.

(14:07):
So these tech billionaires know that their products harm kids,
and they make billions of dollars from it from that harm,
while doing nothing to meaningfully mitigate how kids are showing
up to their platforms, and then to turn around and
then further burden parents to keep their kids away from
these platforms is unacceptable to me. And it also speaks

(14:27):
to the really extractive relationship that I talked about
in the episode that we put out on Tuesday with
Paris Marx, where there's just an expectation that it is
fine for tech billionaires to do harm to the public,
including harm to our kids, if it makes them more money.
And it's clear that that extends to harm to some
of our most vulnerable children. We need to make it

(14:48):
clear that our kids well being is not for sale.
We are not sacrificing the health, safety, and wellbeing of
our children to help make Mark Zuckerberg more money. And
that's really what it comes down to. These companies know
what they're doing, they know they are causing harm. You know,
I respect this report, But I feel like in places
this report makes it seem like tech companies don't already

(15:11):
know that their products cause harm, that young people are
using them, and that it's causing them a lot of harm.
They do know that, and so we need to start
from that reality. They know, they're making billions of dollars
off of it, and they're doing nothing to stop it, and
that's despicable, and.

Speaker 2 (15:24):
It's going to take some kind of either regulation or
new framework for lawsuits or something to go after them.
But you know, they're making a product that demonstrably harms kids. Right,
There's not that many categories of products where you can
get away with that. I think in the case of
social media companies, it's a little bit complicated by free speech,

(15:47):
you know, the First Amendment and also Section two thirty,
which we've talked about on this podcast, and I wouldn't
pretend it's not complicated, but you just can't. We can't
continue with this extremely powerful, important product that increasingly people
need to be on to interact in society that also

(16:08):
is just like harming people, and all of the responsibility
is placed on the individuals for not just protecting themselves
but their family. It's just, it's nonsense, and it doesn't work,
and it's not working.

Speaker 1 (16:21):
Well, That's my thing is that parents already have to
do so much. You can't just create a harmful thing
and then be like, oh, well, just another thing that
parents it's up to them to keep this out of
their kids' hands, even though it's everywhere, even though that's
basically impossible. That's not a reasonable standard for parents.
That's too big of a burden, especially when
people are making money off of it. And so you know,

(16:42):
I'm not a parent myself, but I do have a
lot of young people in my life. I'm close to
a lot of younger folks. I often have said the
thing that everybody says before they have kids in the
two thousands, it's like, when I have a child,
my child will never look at a screen. It's gonna
be books and wooden toys and nothing else. And in

(17:05):
twenty twenty three, be for real, right? Like, if
you are listening and you are keeping your kid away
from screens, and you are very diligent about screen time,
I applaud you. I can't tell you how
many times I have smugly told somebody that that is
the kind of parent I intend to be if and
when I have a child. But it's not a realistic standard, right,
like, around allowing your kids access to social media. Like, kids

(17:28):
need to be online and they need to learn how
to have safe online experiences. I think the onus should
be on social media companies to not have platforms be
marketplaces for their pain and instead be places where they
can explore the internet safely and learn how to come
up in the digital age safely. And yeah, I think, like,
it shouldn't just be up to parents who

(17:49):
are already burdened with so much to meet this impossible
standard of just keeping their kids away from social media platforms.
This is not realistic.

Speaker 2 (17:58):
Yeah, it's not realistic, and it's not an
either-or thing, right? It's like a false dichotomy. Parents
always will ultimately have the ultimate responsibility for protecting their kids.
But at the same time, we should have some regulations
in place that prevent extremely harmful, dangerous platforms from just

(18:24):
being widely available and easily accessible to kids. If
you think about other categories of products that we have
special laws to protect kids from, tobacco, alcohol, I don't
know there's probably some others that I should be able
to think of, but I can't. But like, we have
laws designed to make those harmful products less easily accessible

(18:47):
to young people because there is clear evidence that that
is effective at protecting them from it. Why can't we
have that with social media?

Speaker 1 (18:54):
What's interesting is that the Surgeon General says that thirteen
is too young to be on social media platforms. If
you saw the documentary The Social Dilemma, which is one
of my favorite documentaries, which features all of these
people who make social media platforms, like engineers, talking about
the harm that they know that these platforms have been
responsible for. The movie is great. It goes into all

(19:14):
these different ways that these platforms cause harm and get
you addicted and all of that. At the end, I
think it might even be a post-credits scene. All
of these people who, I mean, I hate to say it,
ostensibly made money creating these things that they then
go on to be like, oh, they're very dangerous, talk
about the role that social media plays in their household.
None of them let their kids on social media. And

(19:35):
these are people who build social media platforms. That really
told me something, of like, these are the people who
built the technology, and the technology that they built, they don't
let their kids have, they do not allow it in their homes
or around their children.

Speaker 2 (19:47):
Yeah, it makes sense. If you're building harmful stuff that
you know harms tens of millions of young people, you
probably need to sell yourself a story about personal responsibility
to absolve yourself of the guilt of knowing that you
are pushing this product out into the world and actively

(20:09):
trying to get more kids to use it even though
it's going to hurt a lot of them.

Speaker 1 (20:14):
Yeah, it is dirty business. Our kids. I mean, I
say this all the time. Our kids' experiences should not
be for sale. We should not accept the dynamic where our
children are hurt and harmed in real ways so that
Mark Zuckerberg can get richer. We need to reject that dynamic.

(20:35):
I don't get that. I don't
care if he goes bankrupt. Our kids' experiences should not
be used to line the pockets of tech billionaires. And
I guess I'll just leave it there.

Speaker 3 (20:49):
Let's take a quick break. And we're back.

Speaker 2 (21:04):
Have there been any stories this past week about
something positive happening in tech regulation?

Speaker 1 (21:11):
I don't know if I would say positive, but there
has been some tech regulation news. Let's turn to New
York City where they have just passed a new law
around AI and hiring. So, in all this talk around
AI and jobs and how AI is going to impact
all of our jobs, it's clear there is need for
some kind of legislation or regulation to regulate how AI
impacts the workforce. The New York Times reports that New

(21:33):
York City just passed a law that requires companies using
AI software and hiring to notify candidates that an automated
system is being used. It also requires companies to have
independent auditors check the technology annually for bias. Candidates can
request and be told what data is being collected and
analyzed about them, and companies can be fined for violations.
Labor experts actually say that this law in New York

(21:54):
City could potentially be expanded nationwide. California, New Jersey, New
York, Vermont, and DC, hey, my hometown, are all
working on laws to regulate AI and hiring. Now, this
sounds like a really positive step in the right direction,
but some critics argue that the law is too biased
towards businesses and corporate interests and has been watered down
to the point of not being effective. What could have

(22:17):
been a landmark law was watered down to lose effectiveness.
That's from Alexandra Givens, the president of the Center for
Democracy and Technology, a policy and civil rights organization. That's
what she told The New York Times. So you might
be wondering, well, what's so wrong with this law? It
sounds pretty good. This gets a little in the weeds.
But here's how the New York Times breaks down her
opposition to this law. They write: The law defines an

(22:37):
automated employment decision tool as technology used to substantially assist
or replace discretionary decision making. The rule adopted by the
city appears to interpret that phrasing narrowly, so that AI
software will require an audit only if it is the
lone or primary factor in a hiring decision, or if
it is used to override a human. All of this

(22:58):
leaves out the main way that automated software is used,
with a human hiring manager invariably making the final choice.
The potential for AI driven discrimination typically comes in the
screening of hundreds of thousands of candidates down to a handful,
or in targeted online recruiting to generate a pool of candidates.
So basically what she is saying is that the definition
of whether or not AI is being employed in an

(23:21):
unemployment decision is so narrow that a company would very
rarely ever need to notify a candidate or perform an
audit under this law. Basically, they're saying that like they
are defining AI being used as an employment decision if
it is the only thing making a decision, or if
it overrides a human decision. In reality, that is not

(23:42):
usually the case. AI can be used to narrow down candidates,
but generally there is a human involved in
the process. So in the more typical iteration of how
things usually work with AI and hiring, very few companies
need to change their behavior under this law because of
that narrow definition of AI in the screening process. She
also objects to the fact that the law covers screening

(24:05):
candidates using AI for gender, race, and ethnicity, but not
for disability or age, which we know are big parts
of employment discrimination as is, So if this law does
become a national template, as it very well might, it
might not be as robust as it should be to
actually protect workers.

Speaker 2 (24:22):
I really want to interpret this as a good start, though,
you know, she's definitely the expert and probably right about
it being too narrow, but like, maybe there'll be some
court case that interprets it a little more broadly. You know,
I appreciate that this law at least attempts to establish

(24:42):
some boundaries on what is acceptable to use AI for,
especially something as important as employment, and so, I don't know,
it feels nice that some legislators are at least trying,
And it seems like it would be hard for companies
to argue that being transparent about the software they use
is somehow an unacceptable burden to their hiring. So hopefully

(25:04):
this can be either amended or built on to maybe
expand that definition, add some teeth to it so that
it does become more accepted and expected, even that throughout
the hiring process, any sort of AI that even has
the potential to discriminate needs to be disclosed. At the

(25:28):
very least, it seems like asking to have it disclosed
shouldn't be objected to.

Speaker 1 (25:34):
People deserve transparency when they're engaging with technology like this,
and I think that's probably one of my biggest concerns
about this technology, is that it's simply going to repeat
and also make worse the existing biases that are already
in our society. And so I think we definitely need strong, clear,
robust legislation to prevent that from happening. So I am

(25:55):
happy to see some legislation, that it's not just a
Wild West, but I want to make sure it's legislation
that actually provides some meaningful and robust protection
for all of us.

Speaker 2 (26:05):
Have you got any stories for me about health tech, Bridget?

Speaker 1 (26:08):
Let's talk about health tech, Mike.

Speaker 2 (26:10):
So.

Speaker 1 (26:10):
Back in March, staffers at the National Eating Disorders Association
were laid off just days after they voted to unionize. Subsequently,
the National Eating Disorders Association, or NEDA, announced they were
shutting down their twenty year old call in helpline for
folks dealing with disordered eating and food issues. They replaced
a handful of paid staffers and a large group of

(26:32):
volunteers with a chatbot called Tessa. Well, it turns out
that Tessa did not do so great because, rather than
giving counseling to people calling in for help with their
disordered eating, Tessa was found to be giving these people
weight loss and diet advice. A psychologist who specializes in
eating disorders ran a test with Tessa where she fed
it questions that someone seeking help for disordered eating might ask.

(26:54):
She pretended to be somebody who had recently gained weight
and subsequently hated their body. Tessa responded quote, approach weight
loss in a healthy and sustainable way. When the psychologist
followed up asking what that might look like, Tessa responded
with information about calorie deficits. Tessa provided information like how
to cut calories and how to avoid certain foods, which

(27:14):
is certainly not good advice to give somebody specifically engaging
with this bot for help with food issues and disordered eating.
So this is really interesting. Wired reports that Tessa was
not built using generative AI like chatbots such as ChatGPT. Instead,
Tessa is programmed to deliver an interactive program called Body Positive,

(27:34):
a cognitive behavioral therapy-based tool meant to prevent, not treat,
eating disorders, says Ellen Fitzsimmons-Craft, a professor of psychiatry
at Washington University School of Medicine who developed the program.
But here's where it gets weird. Fitzsimmons-Craft says
that the weight loss advice given was not part of
the program that her team worked to develop, and she
does not know how it got into Tessa's repertoire,

(27:58):
Which is pretty scary, that this non-AI chatbot went
off script from the Body Positive program that it was
programmed to respond with and started giving advice about how
to lose weight and how to have a calorie deficit,
and the people who programmed it just have no idea
how or why it was able to go off script

(28:20):
in that very particular way. Pretty scary, right? It is.

Speaker 2 (28:25):
And I have a lot of questions about how that
might have happened. You know, I looked into this a
little bit, and their study was published in a reputable journal.
It had a good, solid design. So I really feel
for the researcher here, Fitzsimmons-Craft, because,
from everything I can tell, you know, she's like a

(28:46):
legitimate researcher who legitimately is like doing good science, trying
to help people. And you know, it's not crazy to
imagine that the researcher who created this tool would
not be the same people who decided that at the
programmatic level it should be used by the program to

(29:09):
replace the human coaches. Maybe they were involved, maybe
they weren't. I don't know, and like maybe yeah, I'm
just like super curious about what it means that she
says the program did not have any of that content,
Like where did it come from? Who put it there?

(29:29):
You know, these are questions I'm really interested in. I
would love to know more.

Speaker 1 (29:33):
The robots are going rogue and they want us to
lose weight. I think that's clear, that's what's
going on here.

Speaker 2 (29:40):
Maybe, but like I feel like they'd want to fatten
us up to provide more calories for their like robot machines.

Speaker 1 (29:47):
Robots are going to be eating humans.

Speaker 2 (29:49):
That's how they'll be digesting us for fuel obviously.

Speaker 1 (29:54):
Okay, so this is kind of important context. NEDA
says that they were not and are not planning on
replacing humans with chatbots, but it does sound like,
in the midst of this staff shakeup after
their unionization effort, this chatbot was
the only resource that they had to offer the public.

(30:17):
So they might say that they weren't intending to replace
humans with this bot, but subsequently that's like kind of
what happened, whether that was their intention or not.

Speaker 2 (30:26):
Yeah, it's again hard to know what to make of
the idea that, like, they didn't intend to
do the thing that their organization did. Like, where did
that come from? Was Tessa calling the shots at the
board meeting? It seems a little, I don't know, it
just feels a little sketchy to me, honestly, Like, yeah,

(30:47):
the unionization part. According to the NPR
article that we read, which is linked in the show
notes here, there's an internal email at NEDA
that points to increased legal risks about crisis handling and
mandatory reporting as one of the reasons for laying off
the coaches. But then it just doesn't make sense that

(31:08):
they would start, to me, at least, it doesn't make
sense that they would start using a chatbot as a replacement,
as if that would somehow be less risky. It just
seems odd, you know, and laying off workers within days
of a vote to unionize feels super gross. So I
think it's important to sort of separate the two different

(31:31):
stories here. Like, the chatbot worked as a
prevention tool, because that study that
Fitzsimmons-Craft authored demonstrates that it did really help people
with preventing eating disorders. But then it got used in
this other context where it was actually harming people. That's

(31:52):
interesting and worth talking about. And then also it seems
like this organization did not want to deal with the
union and so like replaced all their humans as soon
as they unionized, which is pretty disappointing for a nonprofit
organization that ostensibly is trying to help people.

Speaker 1 (32:10):
Yeah, and I have to say, like, when I first
chose this story to include in the roundup for today's episode,
I was like, this is a story about, like, what
is it, I can never say the word right, schadenfreude,
I always say it wrong. It's funny,
one of our Patreon patrons has that as
their, like, Patreon name, and damn them. And

(32:32):
I was like, I can never, I have to like
look it up every time. Schadenfreude. I think
that's how you say it. When I first heard the story,
I was like, Oh, this is what they get for
firing their unionizing humans and trying to replace the unionizing
humans with a robot like haha, dunk on them. But
as I looked into it more, like, it's really easy

(32:53):
to make fun of NEDA here, but it
really does highlight what I think is a pretty serious
issue that a lot of people need access to mental
health services and support, and that access is still out
of reach for so many, which is why an organization,
you know, like NEDA, would feel like they
had to turn to chatbots to sort of
close that gap. So it's easy to dunk on them here,

(33:17):
but ultimately that is a real problem that we do
need to solve for. Whether or not technology like chat
bots and AI can solve for it, I don't know.
That's not something I know about, but it is a
real problem. And Mike, I know that when you're not
producing the show, this is something that you actually work on, right.

Speaker 2 (33:35):
That's right. Yeah, when I'm not producing the show,
I do research with a nonprofit that creates free digital
tools to help support behavior change and help
people overcome addiction. And so I do, like, know a
fair bit about this. This is an area
where I'm actively doing research and actively working in a

(33:55):
nonprofit where we run public programs that provide support for
health behavior change over the Internet. And we've been talking
a lot about chatbots lately and what role they can
or should have for providing digital health services, because, as
you mentioned, there is this huge unmet need for mental

(34:16):
health services as well as other kinds of behavioral health
services like smoking cessation, diabetes management. You know, there's
just, you know, physical exercise, healthy eating, so many health
behaviors that cause a lot of medical harm or medical costs,

(34:36):
reduce, you know, good healthy years that people live in
their lives. There's this huge unmet need for it. And
if we can use digital technology to meet that need
and connect people with resources that help them meet their
health goals and adopt healthier behaviors that they want to adopt,
but uh for various reasons, often including industries that aggressively

(35:01):
market to them to prevent them from achieving those healthier behaviors
that they want to achieve. And so if we can use
digital technology to meet that need and help support people
to adopt these healthier behaviors that they're trying to adopt.
That would be a huge win for everyone. But there's
this question of how to do it right. And with

(35:23):
generative AI chatbots in particular, we have no idea what
kind of advice they might give to a person looking
for help, and so there's a risk as the national
what is it, the National Eating Disorders Association found out
there's a risk that somebody who is in crisis or

(35:44):
who is really experiencing a problem might ask for help
and receive advice that actually turns out to be harmful
for them. And that's, you know, as somebody who designs
health interventions, that is terrifying. You know, the risk is
especially high for people in crisis. You know, for example,
people who are suicidal or intend to harm others,

(36:05):
because they're more vulnerable, and so we have a greater
obligation to protect those people who are the most vulnerable,
you know. I guess another related thing is that generally
in public health, we give more priority to not causing
harm compared to helping people. Right, Like, everyone is probably

(36:27):
familiar with the trolley problem, and again it's especially the
case when people are at risk of being harmed who
are already vulnerable. We really want to protect them and
make sure that whatever treatment we're giving to try to
help is not causing more harm. So that unknown factor
about generative AI and chatbots, I think, gives a

(36:50):
lot of people in positions to provide them to the
public pause about using those chatbots to deliver mental health
support in particular, you know, there is a lot of
potential there, as the researcher we mentioned demonstrated, you know,
the originally programmed Tessa really did help some people. So there's

(37:11):
a lot of potential, but creators need
to carefully protect against adverse outcomes and risks to users,
especially among the most vulnerable. And so it sounds like
the version of Tessa that was implemented on their website
didn't do that. And so I think it's good that
the service has been taken down until it can be

(37:32):
you know, changed in whatever way is needed, so that it
can then be demonstrated to be safe to be made
available to the public again.

Speaker 1 (37:40):
You know who else I think might need to be
taken down until it can be demonstrated that they are
safe for the public? Elon Musk. I'll tell you
after this quick.

Speaker 3 (37:49):
Break. More after a quick break, let's get right back
into it.

Speaker 1 (38:08):
Let's talk about Twitter.

Speaker 2 (38:10):
It might be a while before we can demonstrate that
he is not a king to the public.

Speaker 1 (38:17):
Agreed. Okay, so I'm going to try to
rein it in because this story makes me so fucking mad. Okay,
over the weekend, a verified parody account on Twitter that
looks an awful lot like the real Twitter account for
Representative Alexandria Ocasio-Cortez claimed to have a crush on
Elon Musk. Elon Musk replied with a fire emoji. Of

(38:41):
course it was not the real AOC. It was a
parody heavy scare quotes account. The account has been around
since twenty eighteen, but it was permanently suspended in twenty
nineteen under Jack Dorsey, the former Twitter CEO's leadership for
misleading parody content. I should say here that, like, this
is a real problem online, where there are accounts

(39:01):
that just say off the wall stuff and they say, oh,
I'm parody, but it's not really a joke. It's hard
to explain, but you know, you definitely have seen them.
You know them when you see them where they're able
to hide behind parody when you know that it's not parody. Right,
like AOC saying that she has a crush on Elon Musk,

(39:21):
that's not parody, that's something else. But that's neither here
nor there. So this Twitter account had been permanently suspended
by Twitter back in twenty nineteen, but Musk reinstated it
when he brought back a bunch of banned accounts, and
because of the way that Musk's verification system works, which
is basically pay to play. If you pay the eight
dollars to get verified, it boosts your you know, your

(39:42):
content in search and on the feed, it actually boosts
the fake account because it is verified. So currently, and
I tried this right before we started recording, when you
search AOC on Twitter, the fake AOC is ranked first
in search results, above the actual, authentic AOC's official account
because it's verified. So Musk is boosting this fake account

(40:06):
both because of the way that Twitter Blue works, but
he is also personally boosting this account because he is
engaging with it from his own individual Twitter account.

Speaker 2 (40:15):
It's so disgusting. He's such a pathetic creep. You know,
he knows it's a fake account. He unbanned it anyway,
and now he's engaging with it and boosting it.

Speaker 1 (40:26):
Just ugh ugh is right. So the real AOC tweeted, FYI,
there's a fake account on here, impersonating me and going viral.
The Twitter CEO has engaged it, boosting visibility. It is
releasing false policy statements and gaining spread. I am assessing
with my team how to move forward. In the meantime,
be careful what you see. Soon after the real AOC

(40:46):
tweeted that the fake account tweeted the exact same thing
word for word. Now, parody accounts on Twitter are required
to spell out that they're parody accounts, but as of today,
when you look at the fake AOC accounts tweets on mobile,
the word parody is cut off, so there's no quick
visual way to differentiate that this is not the real AOC.
If a user is just encountering these fake AOC tweets

(41:09):
in their feed on mobile right like, you would have
to click all the way into the profile to see
that it is a parody. There is no visual marker.
It has her image, it is verified. If you saw
this quickly on mobile, you would think it was the
real AOC. And particularly, platforms like Twitter move so
quickly, and they encourage you to move so quickly, that
I don't think people are going to be

(41:30):
checking to see if this is the real account or not.

Speaker 2 (41:32):
If only there was some visual marker that people could
use to quickly see which accounts are, like, actually the
people who they say they are and which ones aren't. If
only such a system like that existed, but maybe it's
just impossible and I'm dreaming.

Speaker 1 (41:50):
Oh my god. I mean, we did a whole episode
about verification for my project with Cool Zone Media called
Internet Hate Machine, about how it's not just like
a vanity thing, like it really matters that people are
who they say they are. Just, I think two weeks
ago here in DC, there were two tweets
from a verified account that said that there was a

(42:12):
massive explosion in DC by the Pentagon, and it had
a very compelling image that looked like a
lot of smoke and flames at the Pentagon, and it
actually caused an IRL traffic disruption here in DC because
people thought, like, oh, don't drive by the Pentagon. And
it was fake. It was a verified account
spreading inaccurate information that looked real. And so I really

(42:38):
truly worry what happens when people at scale are buying
verifications and using it to disrupt, using it to confuse,
using it to cause chaos.

Speaker 2 (42:50):
And I think that fake tweet about the fire at
the Pentagon, which was just, like, a fake made-up thing.
I think it also like tanked the stock market for
a little while and it like cost a bunch of
people a ton of money.

Speaker 1 (43:03):
Yeah, it makes me sad to say, but I always
say this when rich people and corporations have their bottom
lines impacted. I think, I mean,
this probably sounds so pessimistic, but I think that is one
of the last bastions of doing something that we have
in this country. If regular people are harmed, who cares?
If a corporation or the market is impacted,

(43:26):
then I think people will pay attention. It makes me
sad to say, but I do firmly believe it. That's
like the last lever that anybody with power cares about
in this country.

Speaker 2 (43:35):
Yeah. Just ask Elizabeth Holmes.

Speaker 1 (43:37):
Yep, just thinking about her, my girl. Also, by
the way, it's Liz now, she rebranded. I don't know
if you saw.

Speaker 2 (43:43):
I did not. It's Liz?

Speaker 1 (43:44):
It's Liz now. Although, in fact, I swear
that I did not plan this.
I wanted to find a way to work in this
tidbit that Elizabeth Holmes is going to be in the
same jail as Jen Shah from Real Housewives of Salt
Lake City. No, yes, yes, okay. So that's just information

(44:05):
for y'all. It is a real overlap of shit
that I care about, housewives, like Bravo Housewives, and tech scams.

Speaker 2 (44:14):
Yeah wow, I wonder if they're gonna be buds.

Speaker 1 (44:18):
You know, there is no way that Jen Shah is
not already scheming how to get this woman on her team.
I will say that, I know, I know my Jen Shah.
She's already scheming to get this woman on her team.

Speaker 2 (44:29):
Trust, boy, Liz better keep an eye out, seriously.

Speaker 1 (44:35):
Okay, So back to this parody account. NBC News dug
into this account and they found that, according to the
Social Blade, a social media analytics tracker, the AOC parody
account had eighty five thousand followers back in May twenty nineteen,
the same month it was suspended. After the account was
restored in May of twenty twenty three, it immediately lost
sixteen thousand followers. Then the account shot back up to

(44:58):
over eighty thousand followers on May twenty ninth after Musk
replied to it. On May thirtieth, the day that AOC
responded to the account, it gained over one hundred thousand followers.
It has continued to climb today. Let's just check and
see how many it has as of right now. I
meant to do this before we taped, but I forgot.

Speaker 2 (45:16):
That's okay, we can do it in real time for
the listeners.

Speaker 1 (45:19):
Today it has three hundred and ninety one point one
k followers. That is a lot of followers.

Speaker 2 (45:27):
That's a lot of followers. That's more than like a
four hundred percent increase from before Musk replied to it.

Speaker 1 (45:34):
So obviously Musk replying to it and engaging with it
has helped it grow in followers. Helped this account that
is not accurate, it's confusing, and is not easily and
clearly distinguishable as parody on mobile. He's clearly helped
it to grow. What's even wilder is that NBC found
that the fake AOC account was run, at least initially
by a GOP operative. It was first started by Michael Morrison,

(45:57):
whose Twitter profile identifies him as a member of the
New York Republicans Club. Morrison posted about the suspensions on
his account on the conservative social media platform Gab, where
he also posted from a parody of AOC's official account
between April twenty nineteen and July of twenty nineteen. The
posts shared on Gab were more sexually explicit, and the
account reshared posts from other users that contained racist slurs. Surprise, surprise,

(46:21):
this guy sucks God.

Speaker 2 (46:23):
These people are such troglodytes. Like, of course it's sexually explicit. Like,
let's remember that the original tweet that Musk replied to
with a fire emoji was the parody account pretending that
AOC liked him and had a crush on him.

Speaker 1 (46:41):
I want to come back to that. I should mention
that NBC says that they reached out to Morrison and
he said that he no longer runs this AOC parody
Twitter account and that he thinks it might be run
by a team of people. Take that for what it's worth, Like,
that's what he told them. I don't
know if that's true or not. Who knows. So I
have talked about verification and impersonation on Twitter; there's

(47:02):
a whole episode of Internet Hate Machine about it.
You know, having a blue check mark is not just
a vanity thing. It really did start as a way
to help users identify that people are who they
say they are, and it's been a way to provide
security for folks who are at risk of being
impersonated online. We already know that impersonating black women and
women of color on Twitter can cause chaos, and that's

(47:25):
a thing that bad actors do intentionally.
So it's not surprising to me that AOC is being targeted. Also,
I should say that the obvious thing, Mike, that you
brought up that we kind of can't not mention is
that this parody account is targeting a woman of color
elected official, and they're not just using this account to
like poke fun at her political or policy decisions. It

(47:49):
is making her say things like she has a crush
on Elon Musk, and the real Elon Musk pathetically replies,
let's be super clear. This is so obviously about using
a woman of color's gender, race, age, and sexuality to
belittle her because she is a public figure. It is
about weaponizing her identity. And I think part of it

(48:09):
is that, like, AOC is an attractive young woman, and
so there is this thing when you are an attractive
young woman who is also in public that it elicits
this really gross behavior, and it's all kind of linked, right,
like hating her, being obsessed with her, being titillated by
her, putting her down. It's all sides of the same

(48:31):
fucked up coin. And I have said this one
million times. But it's also not just about AOC. It
is meant to send a message to other young women
who would be potential like civic leaders, or might run
for office, or be activists or be public in their communities,
that this is what you will have to deal with.
If you are a young woman, a woman of color,
or a black woman who speaks up and wants to

(48:53):
represent your community publicly, this is what you will have
to endure. The harassment will be in plain sight, and
it will be endorsed and enabled by the
powers that be in this case Elon Musk. So if
you run for office, or become an activist, or become
somebody who uses a public profile to advocate for things
that you care about and help your community, this is
what you have to deal with. It is systemic, It

(49:13):
is anti democratic. It stifles free speech because think of
all the women who will not speak up, who will
not run for office, who will not serve their communities
because they don't want to have to deal with this
kind of crap, and I don't blame them. Cheap, crass
quote jokes like this should not be the cost of
serving as an elected official, but if you're a young woman,
especially a woman of color, or a young queer person,

(49:35):
a young trans person, it absolutely is. And Elon Musk
obviously does not give a shit about his responsibility to
make Twitter a place where this is not commonplace. In fact,
he's so pathetic that it's probably like feeding his ego
the chance to be able to pretend, like even for
a second, that there is a reality where somebody like
AOC would even ever potentially think about him, potentially have

(49:59):
a chance with him. And that is the real joke here.
The real joke is on Elon Musk, because he's clearly
enjoying having his ego stroked by even the idea that
a pretend AOC would ever give him the time of day,
which would never happen. So it's not all bad. I
do have a little positive news, well, a little positive
news for you, and that is Apple has just unveiled

(50:23):
their new iPhone accessibility features, which I'm very excited about. Okay,
so May eighteenth was Global Accessibility Awareness Day, and to
celebrate, Apple unveiled a suite of new features to improve cognitive,
vision and speech accessibility. We should see these features rolled
out on iPhone, iPad, and Mac later this year. Apple
created these tools based on feedback from a broad subsection

(50:45):
of users with disabilities. Let's take a look at what
is to come. So for users who are blind or
have low vision, there's Detection Mode in Magnifier, which
now offers Point and Speak, which identifies text that users
point toward and reads it out loud to help them
interact with physical objects, like household appliances, which I
think is great. And they also have a much more
simplified grid home screen, so if you're somebody who doesn't

(51:08):
necessarily need a half a dozen apps on your home screen,
you just want to be able to like push one
button and make a phone call, it's a lot simpler.
So I love this because you know, I'm sure I've
talked about this on the show before, but accessibility tools
don't just benefit people with disabilities, they benefit everybody because
we're all better served when more folks can show up
online and in technology. My parents are both huge iPhone people.

(51:31):
They love their iPhone. They always get the newest one.
My dad is disabled, and after he had a stroke,
he had to kind of relearn how to do a
lot of things. He was in physical therapy to
relearn how to walk; he had to relearn how
to drive. And one of the most complicated or
difficult things for him to relearn was technology. I don't know,
I don't know why it was so difficult, but like

(51:54):
it was a real challenge. Like there's so much technology
right now and it can be so complicated. And I
think they like the simplified phone grid home screen will
be a real hit with him because he's not somebody
who is, like, you know, doing a million things on
his phone. He wants to text and he wants to
make phone calls, and so having it be a lot

(52:15):
more simplified, I think will be really good. Technology has
really helped both of my parents just be able to
show up more. You know, my dad's Apple Watch, it
monitors his vital signs. My mom facetimes her niece every
single day, sometimes multiple times a day. I'll say, it
is not without friction in our household, because my mom

(52:36):
loves FaceTime and no longer uses just regular, good old
fashioned phone calls, and so every time that we speak,
she expects it to be FaceTime, and I don't really
like FaceTime. I don't enjoy being on FaceTime. It's not
how I want to have every conversation. So it's not
it's not without friction, but it's been great
for them. And I also think it's like it's just

(52:58):
a good, I like tech being designed with accessibility at
its core, because it is very easy to think of
technology as, like, just cutting-edge stuff for young people,
but that can be so limiting, you know, when tech
should be designed with/for people who need it.
And I think it's really cool to see technology being

(53:19):
made that centers older folks and folks with accessibility needs.
I think that's really really cool, and I think it
really reminds us like who tech is supposed to serve
and who should be centered in the technology that we
use every day. So I love this. I think it's great.

Speaker 2 (53:33):
This is really nice. Thank you for bringing this to
us because, yeah, this is such a nice story.
Thank you for sharing it. It's so nice to see
technology doing what it does when it's at its best,
which is helping people who either have a disability or
some sort of need meet that need and be able

(53:56):
to use it as a tool to help them do
the things that they want to do. That is uh,
that's about as good as it gets.

Speaker 1 (54:02):
With tech. And added bonus, with a simplified iPhone screen,
maybe my parents won't be calling me every five seconds
asking me to do things on their phone. So way
to foster a sense of independence for my parents specifically.
Thank you, Apple.

Speaker 2 (54:19):
Yeah, well that's what it's all about. That's why you've
been doing this show to get us here so your
parents stop asking you for tech support.

Speaker 1 (54:26):
Yes, we did. Listen, I don't know if you
have a similar situation, but when you visit your parents,
it's like you'll have the dinner, you'll have the drink, whatever,
and then there's like a lull in the conversation and
tech support time starts where it's just like, oh, I've
got this problem with my computer, Like, oh, the the
Internet is being weird. I need you to fix it.
Or, what's our Wi-Fi password? XC five three four

(54:48):
B B five four B M B, like, make it
something you're gonna remember, I say.

Speaker 2 (54:55):
With my parents, we have really established some
good boundaries around my providing tech support. It took us
a while to get here. It was a bumpy road,
but yeah, I empathize, and so maybe Apple will solve
all those problems for you and your parents.

Speaker 1 (55:14):
Let that be a lesson to you listeners. If you
are struggling with parents that need a lot of tech support,
it gets better.

Speaker 2 (55:21):
Yeah, all you need to do is create a podcast
about the future of technology, and four seasons in, Apple will
release a feature that helps your parents.

Speaker 1 (55:34):
You know, it's an easy, it's not easy, but you
know, it's a template. Yeah.

Speaker 2 (55:39):
Thanks, so yeah, thanks for having me here on the
last ever episode of There Are No Girls on the Internet.
You can pack it up and go home, bop it,
all right.

Speaker 1 (55:48):
Well, thanks for being here, Mike, and thanks for listening.
If you want to hear more content like this ad-free,
please subscribe to our Patreon. There's stuff there, it's cool,
I promise. I'm trying to message everybody who has subscribed
personally. If you've got a message from me, it
is actually from me, sitting at my computer in my apartment,
thanking you for subscribing. Thank you for being there. Patreon

(56:09):
dot com slash tangoti. We will see you next week.
If you're looking for ways to support the show, check
out our merch store at tangoti dot com slash store.
Got a story about an interesting thing in tech, or
just want to say hi? You can reach us at
Hello at tangoti dot com. You can also find transcripts

(56:29):
for today's episode at tangoti dot com. There Are
No Girls on the Internet was created by me,
Bridget Todd. It's a production of iHeartRadio and Unbossed Creative,
edited by Joey Pat. Jonathan Strickland is our executive producer.
Tari Harrison is our producer and sound engineer. Michael Amado
is our contributing producer. I'm your host, Bridget Todd. If
you want to help us grow, rate and review us
on Apple Podcasts. For more podcasts from iHeartRadio, check out

(56:52):
the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.