Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
There Are No Girls on the Internet is a production
of iHeartRadio and Unbossed Creative. I'm Bridget Todd and this
is There Are No Girls on the Internet. Hi, and
welcome to There Are No Girls on the Internet. So
we are starting a brand new thing on the podcast
where we are going to be rounding up news and
(00:24):
analysis about tech, the Internet and media that you might
have missed. I will do this every other week, but
if you want to get it weekly, be sure to
subscribe to our patreon at patreon dot com slash tangoti.
So let's get into it. I'm here with my producer, Mike. Mike,
thank you so much for being here.
Speaker 2 (00:39):
Thanks for having me.
Speaker 3 (00:40):
I'm super excited to be here for this inaugural news roundup.
Speaker 1 (00:44):
So we're starting out hot with a story that I
followed obsessively. I have watched multiple documentaries about it. I
have read books about it, I have watched Hulu series
about it. And that is my girl, Elizabeth Holmes. Elizabeth
Holmes has officially girl bossed a little bit too close
to the Sun and must report to prison on May thirtieth.
Y'all probably remember Elizabeth Holmes. She used to run a
(01:04):
blood testing startup called Theranos that said that using a
drop of blood they could screen for diseases and health information,
but actually it never did any of that. Elizabeth Holmes
like she wore a lot of black turtlenecks and sort
of always had this messy hair, and she kind of
talked like this all that's part of this like tech
CEO persona that was also kind of a fraud. Holmes
(01:26):
was convicted on charges of defrauding her investors in her
failed blood testing startup. She's appealing those charges, but the
court rejected her request to remain out on bail while
she appeals her case. She's sort of been able to
drag her feet on reporting to prison, but it seems
like maybe her time has finally run out now. Back
in November, she was sentenced to eleven years and three
months in prison for conspiracy and fraud against investors. Now,
(01:49):
she was supposed to report to prison back in April,
but her prison time was delayed while the court considered
her appeal. This comes after kind of a very glowing, fawning New York Times redemption arc profile that really
painted her in a sympathetic light. But let's not forget
that Elizabeth Holmes's scam hurt real people. There was a
(02:10):
mother with a history of miscarriages who was wrongly told
that she would never be able to have a baby.
There was someone who was given a false HIV diagnosis
and had to wait for months until they could afford
another test. And someone was even given a false cancer diagnosis.
Speaker 3 (02:23):
Yeah, that's what's so egregious about her story. You know,
as somebody who works in health tech, it was like,
really just egregious that she would just be lying about
what this technology can do. But it says a lot
about our legal frameworks for policing or regulating health tech
(02:47):
that none of her jail sentence related to harming those people, or the thousands more people who, you know, received inaccurate medical information. It was all just about the investors. And you know, I'm kind of torn about this, because it's good to see some justice happening,
(03:08):
but it's not happening because people were harmed. It's just because investors were harmed, and that's a real shortcoming.
Speaker 1 (03:16):
No, I firmly believe had Elizabeth Holmes not harmed and
defrauded wealthy investors, rich people, she would not be going
to jail. I think that if she had only harmed regular, ordinary people, which she did,
I don't think that she'd be facing jail time like
she is. And I also don't think that she would
be hated like she is. It is kind of a
sad thing that the reason why she's facing consequences is
(03:38):
probably because she defrauded investors, because people don't care when you harm ordinary people. I think that we've just kind of been
conditioned that you can be a tech company that harms
everyday people and that's totally fine. Harm wealthy people and investors,
then you're going to jail.
Speaker 2 (03:53):
Yeah.
Speaker 3 (03:54):
And so, you know, as we think and talk about on the show, what does a landscape look like that does a better job of protecting people? I think this is a real area where, as health technology becomes
a bigger part of the health landscape overall, we're really
(04:15):
going to need regulatory frameworks to address it, and right
now we just, like, don't have them. The ones we do have, in the FDA, are just very inadequate for
what's coming. And we haven't even talked about AI.
Speaker 1 (04:29):
Well, speaking of regulation, let's talk about those AI Senate
hearings that also happened this week. There were Senate hearings
on AI this week. The hearings started with Senator Richard
Blumenthal playing testimony in air quotes from a deep fake
audio recording of his own voice that was written by
chat GPT and vocalized by an audio application trained on
his Senate floor speeches. You got to hear it. I
(04:50):
know that doesn't make sense, but you'll understand what I'm
saying when you hear it. You gotta hear it.
Speaker 4 (04:53):
Here we go. Too often, we have seen what happens when technology outpaces regulation: the unbridled exploitation of personal data,
the proliferation of disinformation, and the deepening of societal inequalities.
We have seen how algorithmic biases can perpetuate discrimination and prejudice,
(05:19):
and how the lack of transparency can undermine public trust.
This is not the future we want.
Speaker 1 (05:27):
So that is pretty eerie to hear, right like that
sounds just like him. He makes a good point that
you know, it happened to be saying things that he actually agreed with, but it could easily have been saying something that he would never say and doesn't agree with.
And you can sort of get an idea of the
implications of that kind of technology.
Speaker 3 (05:46):
Yeah, it was eerie, and it's such a good little stunt. Like, I kind of love it when senators get up there and do little stunts like this. This
one is particularly good because it's not just an attention stunt,
but it really highlights how it's not just these individual
technologies that are a risk, but really the combination of them that creates some really, like, terrifying possibilities for what the
(06:11):
future holds.
Speaker 1 (06:13):
So these hearings seemed very chummy. CNN reported
that before the hearings, Sam Altman, who you may know
as the CEO of OpenAI, who has basically kind
of become the face of AI, met with more than
sixty House lawmakers over a dinner. It was a bipartisan
gathering featuring an even split of Republicans and Democrats, and
Altman demonstrated like various uses of chat GPT to quote
(06:37):
much amusement, according to a person in the room who
described the lawmakers as riveted by the display. Yeah, something
about that. There's something about that. I don't like that.
This was such a chummy meeting of lawmakers and AI technologists.
I guess I'll just stop
(06:58):
it there. There's something about the chumminess that I find
a little sus. What do you think?
Speaker 3 (07:03):
Yeah, absolutely, it is a little sus right, Like, usually
regulation is an adversarial thing, and in those occasions when
the industry is like leading the charge on the regulation,
it's usually weird.
Speaker 2 (07:19):
Right.
Speaker 3 (07:19):
There's usually some kind of like perverse incentive or like
agency capture or something going on, and it feels weird.
Speaker 2 (07:27):
Yeah, you're not the only one that feels that way.
Speaker 5 (07:29):
Yeah.
Speaker 1 (07:29):
And if you were just kind of casually following the
story via headlines that you didn't click into, you probably
saw that Sam Altman actually pleaded for the Senate to regulate AI.
He said, we think that regulatory intervention by governments will
be critical to mitigate the risk of increasingly powerful models.
And so here's my take. This whole thing just feels
very sus to me. And Scott Galloway actually put it
(07:50):
really well on Twitter. He tweeted this very long list
of all the different times that tech CEOs made a
big show in public asking for their technology to be regulated.
Like Mark Zuckerberg saying please regulate us, or Jack Dorsey saying please regulate Twitter. And I think that what Altman is actually saying is, yes,
(08:11):
we totally agree we need regulation, and we would also
be very happy to write that regulation for you. I
think he's advocating for AI companies making the rules themselves.
I don't think that he's actually like genuinely asking for
lawmakers to regulate AI.
Speaker 5 (08:29):
Yeah.
Speaker 3 (08:29):
I think that's probably right. And you could forgive him
for thinking that Congress may do nothing, right, Like, they
have a long track record of doing nothing, And so
it also feels like a good way to just avoid
responsibility for having some sort of like ethical safeguards in
place on their own, which we can't expect them to
(08:50):
because they're companies.
Speaker 2 (08:51):
They wouldn't do that.
Speaker 3 (08:53):
But yeah, like calling for regulation, it feels like a
good way to pass the buck and be like, oh, Congress,
why don't you regulate us.
Speaker 5 (09:04):
Let's take a quick break. And we're back.
Speaker 1 (09:18):
Well, speaking of passing the buck and posturing while really doing nothing, let's talk about Montana. So this week, Montana became the first state to ban the social media platform TikTok,
which is a pretty big step in the backlash against
this Chinese owned platform over concerns about things like data privacy.
Montana's governor signed the measure on Wednesday. He said that
(09:38):
he did so to quote protect Montana's personal and private
data from the Chinese Communist Party. The ban takes effect in January of twenty twenty four, but people can actually still use TikTok after that. Like, this law does not punish them if they use TikTok. But what it actually
does is target the availability of the app by threatening
entities such as TikTok, Google and Apple with a ten
(10:00):
thousand dollars fine for each day that the platform remains
accessible in app stores for users in Montana. So from
the user end, you don't actually have to do anything
once this law goes into effect. But
eventually when you need to update TikTok for it to
run properly, that's when it will probably become an issue.
So it's kind of a way of like phasing TikTok
(10:21):
out because people won't be able to access the app
in the app store to, like, get the updates that you would need for it to run properly on
your device. According to the Washington Post, app stores are
divided by country or global region, and they don't change or discriminate based on what state a user is in.
Changing that system would require not just carving the stores
into state specific chunks, but also closer monitoring of people's
(10:44):
locations and a by-the-minute system to define what
happens when, for instance, a user drives over state lines,
and keep in mind that, like, people can always just sort of get around this by using a VPN.
So it seems like this law will be very difficult
to enforce, to the point where you almost kind of
have to wonder what the point of this law would be.
The law would be nullified if TikTok is no longer
(11:06):
headquartered in quote any country designated as a foreign adversary
by the US government. The law will probably face a
significant legal challenge. Free speech groups like the ACLU
have already been vocally criticizing this law, and five TikTok
creators filed a lawsuit saying that this ban infringes on
their First Amendment rights. In the suit, they say the
state of Montana quote can no more ban its residents
(11:27):
from viewing or posting to TikTok than it could ban
the Wall Street Journal because of who owns it or
the ideas it publishes. So y'all probably know that we
did an entire episode with misinformation specialist and TikTok creator
Abby Richards about why she feels like banning TikTok just
is not a good idea. We'll link to the episode
in the show notes. Definitely worth listening to. But ultimately,
my take is that there is so much wrong with
(11:49):
our current data privacy laws and landscape in the US
that if lawmakers truly wanted to protect our data and
our privacy, banning TikTok would not really accomplish that. So
I think that laws like this one, especially given that
they're so difficult to actually enforce, are just sort of
meant to signal to us the American public, that lawmakers
are like cracking down on and sort of getting tough
(12:11):
with big tech while actually doing nothing. And I firmly
believe that what we need is meaningful policies to protect
our privacy and data, not just grandstanding and posturing.
Speaker 2 (12:21):
Yeah, yeah, I agree.
Speaker 3 (12:24):
On the one hand, it's nice to see politicians taking
some action. On the other hand, this law seems bad in a handful of different ways, and, like you said, unenforceable.
I'm not very sympathetic to Google crying that they wouldn't
know how to do this, right? Like, you can geofence Google ads down to, like, a city block if
(12:46):
you want to. So the idea that they are going
to be completely unable to handle state boundaries is just dishonest, right,
Like they could technically figure it out if they wanted to,
but they don't want to. Not that this law is a good idea, because, like you said, people can get around it a bunch of different ways. It's probably unconstitutional,
(13:08):
but yeah, wouldn't it be nice if we saw more
thoughtful legislation that, instead of just banning one app for reasons that feel awfully Sinophobic, prohibited categories of data use or categories of data collection so
(13:30):
that social media platforms could not harvest the personal data of Montanans without their explicit consent? You know, that feels
like it would be a much better approach to actually
protect people's privacy.
Speaker 1 (13:46):
There are a million different better approaches that are out there.
I firmly think that lawmakers want the credit for appearing
to do something while actually doing nothing. That is my
big problem. With all of this talk of banning TikTok.
Speaker 3 (13:59):
Yeah, and a big part of what they're trying to do with the TikTok ban is feed anti-China sentiment. I think there are a lot of good reasons to be concerned about China's relationship with the United States and spying on Americans.
Speaker 2 (14:14):
But we could definitely do without the racist rhetoric, right?
Speaker 1 (14:17):
Pretty much always. You know who needs to hear that? Elon Musk. I know that I said that I wasn't going to talk about Elon Musk anymore, like, for my own personal mental health, and I was going to stop talking about Elon Musk, but I have to talk about
this interview that he did. So last week, Elon Musk
did this interview with CNBC. During the interview, he was asked about his tweets about the mass shooting in Allen, Texas,
(14:39):
which was a total tragedy on May eighth that left
eight people dead and seven people wounded. Now, let's just
be super super clear. The Texas Department of Public Safety
has said that the shooter showed indications of holding neo
Nazi ideology, with an official saying that he had patches, he had tattoos. And multiple news outlets, including The New York Times, confirmed this too. But Elon Musk basically just
(15:00):
said a bunch of inaccurate stuff about the shooting. Elon
Musk said quote ascribing it to white supremacy was bullshit.
There's no proof that he is a white supremacist. We
should not be ascribing things to white supremacy if it's false.
When he doubled down on this claim on Twitter, people
actually used Twitter's community notes feature to correct him, adding
a footnote to the post that said Texas police have
(15:20):
confirmed that the Allen mall shooter had neo Nazi tattoos and beliefs, and he wore a patch signifying right wing death squads. The community note was deleted. We do not
know if Elon Musk had anything to do with that community note being deleted, but it does seem like something he would do,
we have to admit. So this is my take. Obviously,
the things that Elon Musk says are not true and
(15:42):
also horrible, right, So like, that's not what I'm giving
a perspective on here. My take is that we absolutely
need to talk about the fact that Elon Musk has
been amplifying and engaging with extremist right wing talking points,
and he's been doing it for a really long time
out in public. I think that for a while, like
it was kind of easy or maybe tempting for tech journalists,
(16:03):
even tech journalists that I really personally like and respect,
like Kara Swisher. I'm obsessed with Kara Swisher. She is
one of my idols. However, I do think that some
of the tech press really kind of let him off
the hook for what he is obviously doing in plain sight. Like, I think that people gave
him the benefit of the doubt. They assumed that maybe
something else was going on, like maybe Elon Musk was
(16:25):
just trolling or kind of just trying to use edgy humor, or he doesn't actually believe the things
that he's talking about, or that maybe he's somehow just
trying to like hear out both sides, as if someone
like Elon Musk is far too smart to fall for
like a right wing extremist echo chamber. But this has
been going on in plain sight for kind of a
(16:46):
long time. Like, remember when Nancy Pelosi's husband Paul Pelosi was attacked by a man with a hammer? Elon Musk tweeted a piece from a fringe website called the Santa
Monica Observer, which has previously reported things like that Hillary Clinton died on nine eleven and was replaced by a body double. This piece falsely claimed that Paul Pelosi knew
his attacker and that they had had some kind of
(17:06):
a relationship. This is obviously not true, completely baseless. Pelosi's
attacker admitted that he broke into their home specifically to
attack Speaker Pelosi, but after Musk tweeted this, other extremists
amplified this baseless claim on Twitter, including Marjorie Taylor Greene,
who defended Musk with a tweet that repeated the lie
that quote Paul Pelosi's friend attacked him with the hammer.
(17:27):
So Elon Musk, we just have to call it what
it is. He traffics in well worn conspiracy theories that
are oftentimes anti Semitic. Right, so he has trafficked in a well worn anti Semitic trope when talking about the influential philanthropist George Soros. He tweeted, George Soros reminds me
of Magneto, which is obviously comparing Soros to the Marvel supervillain,
(17:49):
presumably because they're both Jewish Holocaust survivors. Just in case
you were confused about what he was trying to say,
he followed up with another tweet saying that George Soros hates humanity. Listen, Soros is an incredibly influential, connected philanthropist. Like, I think he's the biggest donor to the Democratic Party. It is absolutely fine to criticize George Soros's,
(18:12):
you know, agenda, perspective, whatever. However, there is absolutely a
way to do that that does not traffic in well
worn anti Semitic tropes, like you don't have to compare
him to a comic book supervillain. You don't have to
say that he hates humanity. You don't have to traffic in all of these very well worn anti Semitic tropes
and stereotypes in order to criticize George Soros. And I
(18:36):
think the fact that Elon Musk continues to do so
is him telling us who he is. It's him showing
us who he is, where he stands, and what he believes,
and we've got to take him at his word.
Speaker 3 (18:47):
Yeah, I think you're right. And you know, a notable difference about his defense of the shooter is that there is nothing, like, funny or jokey in what he was saying. Right? Like, the idea that, oh, he's
just like kidding or trolling or you know, playing to
the to the trolls for funsies or whatever people might
(19:08):
say to let him off the hook. He was just
straight up defending a Nazi, right, Like, there's photos of
this guy's tattoos, and it's like, he's got a swastika, he's got the SS lightning bolts. Like, dude was a Nazi, and Elon Musk is out there saying that he wasn't. Just pure disinformation.
Speaker 1 (19:26):
Yeah. And I also think, like, when the Nancy Pelosi thing happened, I don't remember who, I want to say it was, like, a major funder for MySpace, you know, a bit of a throwback. But he pointed out that it's almost like Elon Musk doesn't understand or is rejecting the responsibility that comes with being the CEO of a platform, that he thinks he's just any other user,
(19:48):
who should be able to, like, tweet whatever he wants. And that's, you know, a complete fiction. Owning this platform comes with a responsibility, and if you're not able to responsibly and ethically handle that responsibility, then you have no business leading a platform, full stop. Jonathan Greenblatt,
the CEO of the Anti Defamation League, the civil rights
group that combats antisemitism, said that the way that Elon
(20:09):
Musk talks about George Soros will embolden extremists, and I absolutely agree with him. Greenblatt says: Soros often is held up by the far right, using anti Semitic tropes, as the source of the world's problems. To see Elon Musk, regardless of his intent, feed this sentiment, comparing him to a Jewish supervillain, claiming that Soros hates humanity, is not just distressing, it's dangerous. It will embolden extremists who already contrive anti
(20:33):
Jewish conspiracies and have tried to attack Soros and the
Jewish community as a result. I absolutely agree, and in
my opinion, it is really time to see Elon Musk
for who he is. If in May of twenty twenty three,
you are still giving this person the benefit of the doubt,
you are still saying that maybe he's joking, Maybe this
is some sort of like genius move that we're too
(20:53):
stupid to understand. It's time to let that go. Elon
Musk is telling us who he is. He has told us who he is time and time again. It is time
for us to believe him. You know, I think it
was Maya Angelou who said, when someone shows you who
they are, believe them. Elon Musk is showing us who
he is. It's time for all of us to believe him.
(21:13):
This is, you know, he's not just engaging with this extremist garbage by mistake. This is what he believes.
This is who he is. And the sooner that we
accept that, the better.
Speaker 2 (21:24):
Totally agree.
Speaker 6 (21:25):
Yeah more, after a quick break, let's get right back
into it.
Speaker 1 (21:45):
Okay. So one last kind of fun thing. Did you see that weird, I mean, I don't know if you follow Essence Magazine on Facebook, Mike, probably not, but did
you see this weird Facebook post that Essence Magazine put
out about Viola Davis.
Speaker 3 (21:59):
You sent it to me and I saw it, and
I was like, Wow, people are just like writing in
bizarre ways that I can't even understand anymore. I guess
I am out of touch with what the kids are doing.
Speaker 5 (22:10):
Okay.
Speaker 1 (22:10):
So basically it's this picture of Viola Davis at Cannes.
She looks fantastic. She's wearing this like beautiful white gown,
just like, oh, it looks amazing. Ten out of ten. So Essence on Facebook, they wrote, and I'm gonna read this verbatim: We give credit when credit is due, and Villa Davis is looking like eight point fifty. Everyone at Cannes can
(22:30):
now pack your bags and return home. This is a eat,
This is an eight. This is supper. God damn, the
damn is broken and the flood of goodness has overflowed.
Ten out of ten. Okay, so this is clearly like
Essence magazine is a magazine that has a black woman readership, right,
and so this is obviously a post that is trying
(22:53):
to use like black woman slang like oh she ate
blah blah blah. I get that. However, I believe, and I have no proof, but no one will be able to convince me otherwise, that this was written by AI. Somebody has tried to train an AI model
to speak like a black woman and do black woman slang.
(23:15):
And I believe this, like, no one will ever be
able to convince me that that's not what this is.
I should say that this post has since been edited and it reads much more clearly. They definitely, like, have polished it up. They've spelled Viola Davis's name correctly
in the edited version. But you know, maybe we don't
need to worry about AI taking all of our jobs
because this is what it looks like when AI tries
(23:36):
to do AAVE, Black slang.
Speaker 3 (23:38):
It trips them up every time. Right? We gotta bring Shafiqah Hudson's your slip is showing back in here.
Speaker 1 (23:44):
Yes, obviously racist word salad is what she called it. Okay,
So I'm gonna end on a little segment that I'm
hoping to return to, which is called is this happening
for everyone or happening just to me?
Speaker 5 (23:54):
So?
Speaker 1 (23:54):
Has anyone else on TikTok noticed segments of movies taking over your For You page? It'll be just like a
juicy or climactic clip from a movie. So I can't
explain why, but I find these clips so deeply satisfying
to watch, even if they're movies that I haven't seen,
or even more when they're movies that I've seen one
hundred times. The last clip that came up for me
(24:17):
was that clip from the movie Selena, a movie that
I've seen no less than a dozen times, and I know the movie by heart, where Selena is shopping
in the mall and the salesgirl is, like, rude to her and her sister when she's shopping, and she has her, like, Pretty Woman moment. You know, we don't need the dress. So that segment came up on TikTok
and I watched the hell out of it, and I
(24:37):
was like, well, I've seen this movie a million times,
I know exactly what's gonna happen, but I don't know.
There's just something deeply satisfying about these movie clips. Like, there are a couple of movies that I have been watching in, like, two minute segments on TikTok. So
want to know is this just me? Is this happening
on your For You page? If you are on TikTok,
and if it is happening, do you find it as
(24:58):
satisfying as I do? Like? Why is it so satisfying?
What's going on? What's the brain chemistry behind why this
is so satisfying to watch?
Speaker 2 (25:05):
Yeah, it's a great question.
Speaker 3 (25:08):
You know, I don't spend a whole lot of time
on TikTok, but I know a lot of our listeners do. So, you know, send us an email at hello at tangoti dot com.
Speaker 2 (25:18):
How else can people share their experiences?
Speaker 1 (25:22):
Well, you can send us an email, you can find
me on social, or you can subscribe to our patreon at patreon dot com slash tangoti. That's T A N G O T I, y'all. I promise this will not
be one of those podcasts where like every time you
listen it's like, subscribe to our patreon, subscribe to our patreon.
But it's new. I'm excited about it. We just got
(25:43):
our very first subscriber. Thank you, Karen Jay, for your support. But yeah, if you want more news, analysis, and media, subscribe to the patreon and let us
know what you want to hear. Like, I want it
to be a resource that is helpful.
Speaker 5 (25:56):
Yeah.
Speaker 1 (25:57):
So I can't wait to hang out.
Speaker 5 (25:58):
With y'all there.
Speaker 1 (26:02):
If you're looking for ways to support the show, check
out our merch store at tangoti dot com slash store.
Got a story about an interesting thing in tech, or
just want to say hi, you can reach us at hello at tangoti dot com. You can also find transcripts for today's episode at tangoti dot com. There Are No
Girls on the Internet was created by me, Bridget Todd.
It's a production of iHeartRadio and Unbossed Creative edited by
(26:23):
Joey Pat. Jonathan Strickland is our executive producer. Tari Harrison
is our producer and sound engineer. Michael Amato is our
contributing producer. I'm your host, Bridget Todd. If you want
to help us grow, rate and review us on Apple Podcasts.
For more podcasts from iHeartRadio, check out the iHeartRadio app,
Apple Podcasts, or wherever you get your podcasts.