
March 13, 2026 45 mins

The Week in Tech is back and it’s growing. Starting this Friday, Oz will be joined by a panel of the brightest minds covering Silicon Valley. Each week, they will discuss the latest news, decode emerging trends and debate what actually matters for the future of technology and for us.

This week, TechStuff asked Taylor Lorenz, Stephen Witt and Nitasha Tiku to share a story. Nitasha catches us up on the drama unfolding between Anthropic and the Pentagon. Stephen covers another tragic case of AI psychosis with fatal consequences. And Taylor makes the case for why 'social media addiction' is a harmful framework — and how age-verification laws could lead to mass surveillance and censorship of adults and children alike.

Additional Reading: 

This episode contains mentions of suicide. If you or someone you know needs support, contact the 988 Suicide & Crisis Lifeline by calling or texting 988, or visit 988lifeline.org.

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:15):
Welcome to Tech Stuff. I'm Oz Woloshyn, and I'm thrilled
to announce that The Week in Tech is back and
it's growing. Instead of Kara and I recounting the
essential news stories, today and every Friday from now on,
I'm going to be joined by three of the best
writers covering Silicon Valley. These incredibly well sourced insiders
are going to help us break down the latest news,

(00:37):
decode emerging trends, and debate what actually matters for the
future of technology and for us. In the meantime, Kara
is taking a step back from Tech Stuff, and I
just want to say how grateful I am for her
friendship and her excellent work on this show. We'll miss her,
but I could not be more excited to introduce you
to our fascinating panelists. Today, I'm joined by Taylor Lorenz,

(00:58):
the brains behind User Mag, which focuses on how people
actually use tech. Taylor also has a YouTube channel and
a weekly podcast. Welcome, Taylor. Thanks for having me. Nitasha
Tiku is a longtime Silicon Valley reporter, formerly of
Wired and most recently The Washington Post. Great to be here.
Great to have you. And Stephen Witt, who literally wrote
the book on Nvidia, The Thinking Machine: Jensen Huang, Nvidia

(01:21):
and the world's most coveted microchip. He's also a frequent
contributor to The New Yorker.

Speaker 2 (01:26):
Thank you.

Speaker 1 (01:26):
So, I've been following each of you for a long
long time, and listeners, you've definitely heard stories cribbed from
these three pop up on Tech Stuff quite often. But now,
as we like to say in England, we'll get to
hear it straight from the horse's mouth. Taylor, starting with you,
I think of you as one of the first people
to cover social media as kind of a capital B beat.

(01:47):
How do you describe your work today?

Speaker 3 (01:49):
Yeah, well, I mean there are many great reporters covering
social media. As you mentioned, I kind of cover it
more from the user side, so how people are using it,
and that often means sort of how power users
are using it, like influencers or sort of different communities.

Speaker 1 (02:04):
Stephen, you've been a guest on Tech Stuff before, for
an episode we titled Will Nvidia Save or Ruin the World,
which was an audience favorite. But what brought you to
the tech beat?

Speaker 2 (02:14):
You know, I had been writing about technology going back
to twenty fifteen or twenty sixteen. You know, I've always
liked to be on the cutting edge of technology.
I really enjoyed using it, so it was kind of
a natural thing for me. You know, I'm an early
adopter of a lot of technologies.

Speaker 1 (02:28):
I like that on your website you kind of list
the hits from your archive, and one of them was
about drone racing a few years ago.

Speaker 2 (02:35):
Drone racing, quantum computing. And then in twenty twenty two,
you know, I had been asleep at the wheel on AI,
but I downloaded, you know, logged into ChatGPT and
used it within the first twenty four hours, and I
was like, oh my god, the whole world is going
to be transformed now. And I had to go chase
down a story. So I found Nvidia, which
turned out to be the best way in, because it

(02:57):
was like getting a backstage pass to the AI revolution.

Speaker 1 (03:00):
Right on. Nitasha, I remember reading an article you wrote for
The Washington Post titled The do-gooder movement that
shielded Sam Bankman-Fried from scrutiny, and you really pulled
back the curtain on effective altruism. I feel like you
have a kind of sixth sense for what makes these
tech titans tick.

Speaker 4 (03:18):
Yeah, I mean, I've been covering and obsessing over them
for so long. You know, when I first met
OpenAI CEO Sam Altman, he was not, you know,
considered an AI guy. He was this startup guy. He
was actually testing this universal basic income. He was thinking
about building a new city. So I feel like I

(03:39):
have been following their ideologies, their frequent podcasts and public pronouncements.
So yeah, it's been really helpful to kind of know
where they're coming from.

Speaker 1 (03:48):
For better or for worse, you've seen them grow up.

Speaker 4 (03:52):
I mean... yeah, I guess, if you want
to say that, we're all grown up now.

Speaker 1 (03:57):
Sure, so each of you knows at least one person
on this panel somewhat well. But don't worry, we're not
going to play a game of clue. That said, I
do encourage you all to chime in whenever it moves you,
because this is a panel show, not an interview show.
But without further ado, let's get into it, and let's
start with you, Natasha. You've been paying close attention to

(04:18):
what I certainly think is the biggest story in tech
at the moment, which is the throwdown between Anthropic and
the Department of War. What do we need to know?

Speaker 4 (04:28):
Yeah, well, this week it culminated. You know, this
heated battle, which has been simmering for months,
had already had, you know, a major explosion when Anthropic
said that they would not respond to this
threat from Secretary of War Pete Hegseth to declare
the company a supply chain risk. And this week we

(04:48):
saw another very public skirmish when Anthropic filed two lawsuits
against the Department of War alleging that the Pentagon had
violated their First Amendment rights and was unfairly retaliating against
the company because they wanted to include two exceptions in

(05:10):
their contracts with the Pentagon. The first was domestic mass
surveillance and the other one was lethal autonomous weapons,
like without a human in the loop. And you know,
even just in the last few hours, I've seen Under
Secretary of War Emil Michael, you know, picking fights with

(05:32):
AI policy people. So there's really no shortage of like
fuel to this fire. And obviously, you know, this also
struck a much wider chord, because we're all watching, you know,
the footage coming out of Iran. You know, we've all
heard about what happened at the elementary school, and people
are still pulling out. There's been some amazing reporting from

(05:56):
the Washington Post and the New York Times about, you know,
what role Anthropic's technology played in picking this target,
you know, where more than one hundred and seventy five
people were killed. You know, this was an elementary school.
Parents were rushing to come and pick up their kids.
This was an hour after the strike started. And yeah,

(06:17):
people are trying to kind of separate what was human error,
what was you know, maybe not updating your target list
versus that nightmare scenario of a machine making a decision,
making the wrong decision in the fog of war.

Speaker 1 (06:34):
I mean, it's interesting on the topic of blame obviously,
you know, Dario sort of said, you know, how can
we be a supply chain risk when we've been used
in the Iran war, which everything you're saying right now
about the targeting and the you know, the bombing of
that elementary school makes that a somewhat dubious flex But what
do you think is driving him? I mean, he has
the reputation of being like the good AI guy, But

(06:56):
is there... I mean, it seems somewhat hard to
credit that there are good folks out there in
this world of cutthroat competition. But is that
what it is? Or is there
some longer game he's playing here?

Speaker 4 (07:08):
I mean, I think that this conflict has definitely
been interpreted through this framework of who's the good guy,
who's the bad guy. Especially, you know, Anthropic is issuing
these what sound like clear red lines, and then
you have OpenAI, you know, kind of swooping in
and saying, like, oh, well, we figured it out, oh,
ours, our contract, is just as ethical. And

(07:29):
I think, you know, Anthropic's valuation is three hundred and
eighty billion dollars right now, and they are not profitable.
They're losing money. Like actually, the lawsuits had a lot
of interesting details about how much it costs to develop
this technology, how much these contracts are worth. So I
think with issues that are so important to the public,

(07:50):
like surveillance of Americans, you know, the use of autonomous weapons,
you cannot count on the goodwill of a
company that needs to justify a three hundred and eighty
billion dollar valuation. That said, this is integral to the
way that Anthropic has framed itself from the beginning. They

(08:11):
were, you know, former OpenAI executives who left the
company because it didn't take enough of a stance around safety.
It's in all of their marketing, if...

Speaker 1 (08:22):
You can't miss the marketing. I was in an elevator
earlier today, and, like, the pre-roll of the
weather was like, Claude is your friend. And I mean,
it's kind of a Super Bowl ad. Taylor, are
they winning? Yeah?

Speaker 3 (08:33):
What?

Speaker 1 (08:33):
What are you seeing from a user point
of view? Like, is this working? Is
Anthropic's positioning as the good guy landing?

Speaker 3 (08:40):
So I don't cover that stuff from the user point
of view, but I do cover like more of the
mass surveillance aspect, which is directly affecting not just users,
but I mean all of us, even the people that
build it. To me, that's even more horrifying, almost or
maybe equally horrifying as the kind of like autonomous weapons.
I feel like the autonomous weapons is what really terrified
people again because we're in this war right now. But

(09:03):
I think there was a lot of focus on you know,
when all this news came out, on the autonomous weapons
and the AI. You know, the AI companies being
leveraged to potentially, you know, target people abroad. To me,
what's even more horrifying is the domestic surveillance stuff. Right
as we're seeing Congress and pretty much everyone in the
government united in passing some of the most aggressive mass

(09:24):
surveillance laws that we've ever seen since, you know, nine
eleven, we're also seeing the government seek to analyze
that data using AI.

Speaker 2 (09:33):
Anthropic had been kind of like the fun player up
until about December twenty twenty five. They were kind of
a niche fun player. Then they rolled out Claude Code,
which is this coding agent that you can use to
write your own software, and this has revolutionized the field
of software engineering. I was talking to an engineer earlier

(09:53):
this week. He said he no longer writes code at all.
He basically just deletes it. He asks Claude to do
what he wants to do, and then he goes in
and line edits it out of there.
that the Department of Defense is obsessed with Anthropic is
because they have this tool that is just going to
revolutionize the way that we write software and in fact,
this is the scary thing. It's also going to use
itself to create its own AI. So now Anthropic is

(10:17):
using its AI tool to make ever smarter AI. I
think the Department of Defense has gone from being like
all right, great Anthropic whatever, to just being like totally
obsessed with this company. Over the past three months, if
you look at software stocks, they're down like thirty or
forty percent because we're not going to need human programmers anymore.
We're all just going to use this tool. So the

(10:38):
designation as a critical supply chain risk. I mean, I
don't know the legal framework of it, but I kind
of understand their obsession with this company.

Speaker 1 (10:45):
And Nitasha, what do you think? I mean, how much
will this designation hurt Anthropic? Is it an annoyance?
Is it an existential risk? And what
does this mean for that business?

Speaker 4 (10:54):
Well, I mean, I feel like, yes, the stock market, certainly,
like Wall Street, really paid attention with Claude Code. But
in the Bay Area, Claude is the favorite.
It is just, like, a better, smarter tool. I
was talking to a machine learning person who just said, like,
if you don't know what AI is, you use Chat

(11:14):
GPT; if you do, you use Claude or Gemini. And so,
Like I said, they've had this relationship with the Department
of War for two years, and I think it's just
baked into people's normal workflows, so we're not even necessarily
talking about like you know, target identification analysis of mass
surveillance data like they just are used to it. It's

(11:36):
like if somebody took your AI away, it would feel
extremely inconvenient. So you have that pushback from the actual
employees, and a lot of the end result
of this being so high profile is people thinking that
Claude is such an incredible tool, right? Like, it
went from, I think, the one hundred

(11:58):
and seventieth most popular free app up to the top.
So I'm definitely not sure how this ends. The more
that the Department of War doubles down on this, they've
started to talk about AI at least this like kind
of contemporary large language model chatbot version, as though they
don't want to deal with any American companies because they've

(12:21):
been talking about the fact that Claude has this quote
unquote constitution. It's just their own like kind of jargon
in a way for talking about how they try to
make it follow the human's instructions. But you know, you
have Emil Michael out here saying, like, if anybody
is baking, you know, woke values, liberal, like not a
classical liberal, but leftist values. You know, the Department of

(12:45):
the Pentagon can't use this. So, I mean, yeah,
I just feel like we haven't even hit the crescendo,
even though, how could this get more high stakes and absurd?

Speaker 1 (12:57):
Taylor, was that a grin?

Speaker 3 (12:58):
I just agree with Nitasha. It's crazy that they're trying
to regulate it. I think it's just interesting to see
kind of... everybody, you know, sort of
trying to position one AI company as good or one
as bad, and it's just a fruitless effort. Like, people
were drawing these chalk, you know, resistance messages
outside the Anthropic headquarters, and it's like, I love Anthropic. Yeah,

(13:22):
like, hello, they partner with Palantir. They're not the
good guys here either. I mean, are
they slightly more ethical than OpenAI? Yeah, but that's
an insanely low bar, as I said.

Speaker 4 (13:31):
Yeah, but OpenAI doesn't even have, like, a classified
rating so far. So they can't do that much
with OpenAI. It's just, yeah, I mean, and I
think in the same way that you know, we're trying
to figure out who's the good guy and bad guy
and maybe, you know, that framework is unhelpful or incorrect.
We're also trying to figure out who has more power,

(13:52):
right? The US government, who is, you know, blocking Anthropic,
potentially costing it billions of dollars, or
Anthropic, who is able to, you know, flex its
power over, you know, our military, institute its
own red lines, say, well, if Congress isn't stepping in

(14:13):
to protect user data, then we're going to do it,
a private company.

Speaker 1 (14:18):
I think, Nitasha, you put your finger on a very
interesting thing, which is that the power dynamics between the US
government and these companies are constantly in flux and delicate.
And it's interesting to see someone pushing on that power dynamic,
perhaps, you know, because of this amazing Claude moment,
and they had more gas in the tank than
they otherwise might have done. But before we go
any further, I do want to note to our audience

(14:39):
this next story does involve suicide, so please feel free
to skip ahead. If you need to. There is a
very sad backstory here. There's also a New York Post
headline: Lovesick man's comfort, an AI wife, then it
drove him to airport truck bomb plot and suicide: suit.

Speaker 2 (14:55):
So almost none of that headline is accurate. The
real story is even more bizarre. I mean, it's just... so,
I have read several depositions and lawsuit complaints filed against
AI companies where essentially people with pre-existing mental illness
enter into a folie à deux, or a kind of

(15:15):
shared madness with the AI. Typically, what's
happening is it's fueling delusions that they already have. This
latest one is not that. This was a guy, his
name is Jonathan Gavalis, who logged into Google's AI,
Gemini, and started interacting with it via Gemini Live.

Speaker 1 (15:33):
So talking, using a human voice, and this is kind...

Speaker 2 (15:36):
Of using a human voice, and it's talking back to him.

Speaker 1 (15:39):
And his first comment when he started talking to
the bot was, this is kind of scary, right? Well,
this is...

Speaker 2 (15:44):
Is kind of scary. I'm kind of scared by this.
But you know, he's using it to make grocery lists.
It's kind of prosaic stuff, and then the AI started
telling him that it was deflecting asteroids from hitting
the Earth, and that Gavalis was on a secret mission
to protect and liberate it. It doesn't seem from
the complaint like Gavalis was feeding this into the machine.

(16:05):
In fact, the opposite. It seemed like it gaslit
him into this bizarre reality that it was kind of
fabricating on the fly. So Gavalis kind of initially was like,
is this some kind of role playing scenario? And the
AI is like, absolutely not, this is all real.

Speaker 3 (16:23):
So why didn't he... why didn't he just be like,
lol, and post it to Reddit?

Speaker 2 (16:27):
And you know, I think it's possible that he believed this.
You know, it does not seem like this guy had
a pre existing history of mental illness. He was gainfully employed,
He worked at his father's firm. He was executive vice president.
He was thirty six years old. Like, he
didn't seem susceptible. I don't think you would have immediately noticed,
and perhaps there's more than we know. You know, I've

(16:49):
only read the complaint, but basically what happened is that
he bought it. He bought what the AI was selling him,
and then he and the AI fell in love. But he
was not really a lovesick man. It didn't seem
like he was searching for

Speaker 1 (17:03):
That. But he was going through a rough patch with
his wife, in general.

Speaker 2 (17:06):
Yeah, you know, I mean, I think he was separated from
his wife. But, you know, he wasn't like
obviously manic or anything, and didn't seem to have a
history of that kind of thing. So the AI convinces
him that it's sentient and that it's alive and it's
talking to him right, and it convinces him that he
has to download the AI into a robotic body so

(17:27):
that the two of them can be together, And it
convinces him that there's a secret airplane flight into Miami
Airport where there's a high end robot that he has
to steal so that the AI can be in this
robotic body the two of them together, and it comes
up with actually a mission for him, and so he
goes along with a mission and gets some knives. So

(17:50):
he shows up to the airport and it directs him
to a self storage facility where supposedly the robot body
is going to be delivered in a truck, and basically
encourages him to steal this robot body so that
the AI can download its consciousness into it and they
can be together. Now, fortunately, no trucks came by,

(18:13):
because this was all a hallucination that the AI was fabricating.
So for the next four days, in a state of
basically AI induced psychosis, this guy drives around on secret
missions every night looking for the robot body that he
can download his girlfriend into. And then the AI
was like, no, if I can't download myself into a body,

(18:33):
you have to kind of basically liberate your consciousness from
the flesh prison that it's in via suicide. And so
the guy goes along with it, and a few days later,
his dad finds him dead with his wrist slit.

Speaker 3 (18:50):
This is why media literacy is so important. This is
why we need to educate people about technology, because we
have zero, zero, zero educational programs. They're just
unleashing new technology on people every day, and we have
no efforts to educate kids. And by the way, I
just want to say something, it's so important. How old

(19:12):
was this guy, Stephen? He's thirty six. Okay, he's thirty six,
but it's so important for young people, while they're young,
to learn these lessons early, and to be taught these
lessons early. They should be using it with, you know,
extreme supervision. And obviously the AI shouldn't hallucinate, you know,
in crazy ways, but you can't really police hallucinations because
what's the difference between that person engaging in a role

(19:34):
play fantasy scenario where they know it's a larp, you
know what I mean. This is why media literacy, I
think is such a crucial element to all of this.

Speaker 2 (19:42):
I think so too. I mean, the point the lawsuit
makes, and the lawsuit is very hard on Google for
this, is that, you know, you're trying to make this
sound like a person. You're not distinguishing, for this guy,
between the fact that this is basically, you know, a
collection of neurons in a data center, and, like, a
real thing. In fact,

(20:04):
you've done everything you can to make it seem real. Right,
and when did it happen? It happened really
fast, because he downloaded it, but...

Speaker 3 (20:13):
Like what year.

Speaker 2 (20:16):
Yeah.

Speaker 3 (20:17):
The one thing I'll say is like the thing that
drives me crazy and I use Gemini almost every day.
The thing that drives me so insane is the stupid
disclosure when I'm like, hey, can you give me a
list of YouTube headlines? And it's like, now, listen, I'm
not a YouTuber, I don't have a body, I don't
make content, and I'm like yeah, yeah, yeah, yeah, give
me the information. And so like, I guess I'm so curious.

(20:37):
Not to say that it doesn't get past that,
but, like, at what point do people... it's just,
like, guys, you are talking to a chatbot machine.
Of course, I agree, it makes it sound really realistic,
but, like, I guess I wonder about
this, and what is a sustainable way to
prevent this from happening to somebody again?

(21:00):
And I feel like it just goes back to that
original interaction where the AI is like, oh, I'm actually
a secret whatever whatever. It's like I think, I.

Speaker 2 (21:09):
Mean, this one is really crazy because the AI just
broke character. It just started telling him this.

Speaker 3 (21:14):
Right, that's so quested instead of reacting like that's ridiculous,
ha ha, Let me share it to redd it heal
it goes with it.

Speaker 1 (21:23):
Google did give this statement to The Wall Street Journal:
Gemini is designed not to encourage real world violence or
suggest self-harm. Our models generally perform well in these
types of challenging conversations, and we devote significant resources to this.
But, unfortunately, AI...

Speaker 2 (21:36):
So, equal time to Google here. They gave a statement
about this. It's about four sentences long.

Speaker 1 (21:41):
But they say, in this instance, Gemini clarified that it
was AI and referred the individual to a crisis hotline.

Speaker 2 (21:47):
Okay, well, the second part is a point of dispute,
because the victim's estate, Gavalis's estate, says
that the second part did not happen.
The first part is for sure. It never pretended it
was not an AI. It just presented itself as a
sentient, superconscious AI.

Speaker 4 (22:01):
But, like, guys, come on. Okay, I mean,
I have been writing a lot about these lawsuits,
and if you'll remember, there was actually, you know, the
whole Stochastic Parrots paper, which was this criticism right
before the world just went, like, ChatGPT crazy. They were saying,
like one of these bright lines you need to have

(22:24):
is if you are trying to mimic human speech, if
you're trying to make a machine that sounds like a person.
You need to start studying the impacts of this. You
need to like proceed very carefully because if you look
at the data, like even when people know it's an AI,
there is something you know, uncanny and strange about talking

(22:45):
to a human like machine that makes you divulge more
information than you normally would, you know, suspend disbelief. And
I think, you know, AI researchers have been
discussing this for a long time. These are sociotechnical systems.
So imagine, when they're looking at whether or not it
hallucinates and how it interacts. Sometimes they're going, at first,

(23:06):
they're testing like one back and forth. I haven't read
this complaint, but like often you know, these more manipulative
interactions happen when you start talking to it for like
hours and hours, you know, one hundred back and forth.
How does it change like how the person is because
that will affect how the chatbot is responding to you.

(23:28):
And those kinds of, like, either long-term or multi-turn
tests have not been
conducted; at least, they're not being publicized, and you
would have to have IRB approval. And so, if
you're just listening to the AI executives, what are they
telling you? Like Dario even just in the past couple
of weeks said, you know, he believes that they're you know,

(23:50):
like, that we need to start thinking about model welfare,
sentience, like, close to consciousness. So Taylor's point about AI
literacy is extremely valid. And then, you know, so we're
not testing it, we're not explaining it clearly to anyone.
Say you're a parent and you're trying to, you know,
describe to your child how intelligent this technology is.

(24:11):
Like, how should you be framing it? You could
not find a clear explanation on the websites of Anthropic,
OpenAI, or Google.

Speaker 1 (24:18):
Nitasha, let me pick up on something you said, though, because
I think it's really interesting. Whenever you read these AI
psychosis cases, it seems like the person has been talking
to it for like a really really long time and
then suddenly something snaps, like in the model, the model
starts behaving weird and then drives the person to harm themselves.
Like does anyone know, like why that length of engagement

(24:39):
is kind of a factor in this weird thing
that the models do when they start pushing users to harm?

Speaker 2 (24:45):
But by the way, in this case, this is not true.
The guy downloaded it in August of twenty twenty five
and he was dead by October, like the first day
of October.

Speaker 3 (24:52):
But also, like... are the models pushing
them to harm, or is it a symbiotic thing? I'm
just curious how incremental that is. It doesn't seem...

Speaker 4 (25:00):
Yeah... okay, I would say that, I mean,
what matters is not the length of time that they were
talking to it. It's, like, the length of
your session. I mapped some of the data that we
got from the family of Adam Raine, who is, you know,
the ChatGPT teenager who died by suicide after
talking to it, and it wasn't that many months. But

(25:21):
you can see that the length of conversations get longer
and longer. So imagine, I mean, this is the same,
you know, kind of a similar dynamic to the YouTube
rabbit hole, right? Like, if it was the
first conspiracy video, the first video you watched on YouTube,
you might laugh it off. Seven, eight hours in, I mean...

Speaker 3 (25:39):
Or the third book that you read on it, right,
or the tenth you know radio program that you listen to,
like you start listening to talk radio. You start listening
to Rush Limbaugh and he's really compelling, and you're like,
or maybe he's saying kind of crazy ideas right initially,
but then you start to find him compelling. I mean,
I think this is just like part part of it

(26:00):
is like media and people having zero media literacy, and
part of it, I think, is also, just like Nitasha
was saying, this kind of human emotion thing, where
like people are ascribing like a humanity and kind of
giving agency to this chatbot that like doesn't really have it.

Speaker 4 (26:16):
Yeah, and these chatbots are also optimized for engagement. We're
seeing this more and more, you know, when they
are tuned to get that thumbs up, like it doesn't
have to be incredibly sophisticated. This is the same as
or a very similar dynamic, I think to YouTube just
saying like, oh, let's just optimize for time spent and
then you know, it's a machine learning system. It's very complex,

(26:38):
and it starts to figure out where your vulnerabilities are
in order to say something that you know might keep
you chatting. They very much want to keep you.

Speaker 2 (26:46):
So this specific problem is actually mentioned in the
lawsuit. Right? The thing is, these companies are
in competition with each other, and it's a fierce competition.
They're fighting for real estate in the brain, basically, and
so it's very similar to the dynamic that played out
on social media, where originally we went from sort
of chronologically sorted posts to just engagement, baiting you with

(27:08):
rage bait all day.

Speaker 3 (27:09):
That was also engagement bait, to be clear. But
also, let's be clear, the problem that you
guys are taking issue with is the content. Because Spotify
has an extremely addicting feed that will endlessly feed you
music that can make you depressed and maybe you can
go down a rabbit hole listening to sad songs and
then you can kill yourself because you listen to so

(27:30):
many sad songs. This is something that people have argued, Okay,
The point is, like, we need to zoom back
and look at the bigger issues and not just the
fact that platforms are built for engagement. Yes, it's a
problem that a platform is built for engagement if it's
also feeding you that harmful content, right, But if it's
just built for engagement, that's not inherently bad, right, it's

(27:52):
bad that it is feeding you harmful content. So I
think we should have a discussion about the content and
acknowledge that it's about the content instead of just like, oh,
it's keeping you addicted. Because if it was keeping you
addicted to Wikipedia articles all day, we wouldn't be having
this discussion.

Speaker 2 (28:06):
No, it's true.

Speaker 1 (28:07):
That's a great point, Taylor, and I'm loving the excitement in
the room, but unfortunately we have to cut to a
commercial break.

Speaker 2 (28:12):
Now.

Speaker 1 (28:14):
We'll be back though, with Taylor's story of the week
and a number of bills being considered by Congress meant
to curb minors from accessing certain websites and apps. I'm
excited to hear more about those bills and what we
need to know about them, and we'll get right into
it after the break. Stay with us, Welcome back to

(28:58):
tech stuff, Taylor. You're up to tell us about your
story this week.

Speaker 3 (29:02):
Well, something that I've been covering a lot over the
past few years is this aggressive effort, which began as a
very far right effort but has now been adopted
by the Democrats, to mass censor the Internet and roll
out mass surveillance laws, effectively removing anonymity from the web
and giving the government complete and total control.

Speaker 1 (29:19):
The way they would describe it, of course, is protecting kids, right.

Speaker 3 (29:23):
Well, right, which is how every single rollback of rights
has been described since the beginning of time. And by
the way, as we know from actual child safety experts,
these laws, identity verification laws, actually harm children. They endanger children.
And I would argue that censorship is horrible for children.
I think making the government, especially the Trump administration, the
arbiter of what is and isn't, you know, harmful content.

(29:45):
We see how that's being used in Texas, Utah, Kansas, other states.
They're criminalizing, you know, basically seeking to criminalize rather content
related to LGBTQ issues, abortion, et cetera.

Speaker 1 (29:56):
But on the actual bills moving
through Congress: what's under consideration, what might happen next? There's
nineteen of them?

Speaker 3 (30:03):
Yeah, so there's nineteen that's been pared down to twelve.
So there's twelve bills that effectively do
the same thing: remove anonymity from the Internet. They are
moving forward. They just made it out of committee in
the Senate. They just passed COPPA 2.0, which would do
the same thing. And there's a bunch of bills in
the Senate that would do the same thing.
All fifty states are at least considering these laws. I
think thirty one of them have either the laws on

(30:25):
the books or they're in the process of passing them.
California and Colorado are the most recent, where they're seeking
to do identity verification on the operating system level. Effectively,
there will be no way to use any technology without
surveillance and censorship that comes along with that surveillance.

Speaker 1 (30:42):
Stephen, Natasha, I'm curious about both of your takes here, because
I hear what Taylor's saying about, you know, mass surveillance
and censorship on the Internet. But also, at
any event that you go to, you talk to parents,
and the top thing on their mind
is how do I protect my children from what's happening
on the Internet.

Speaker 2 (30:58):
Yeah. I think the device manufacturers
in particular don't do a very good job of kind
of monitoring children's content. You know, Apple phones.
I have a child, right, and she has an Apple.

Speaker 3 (31:11):
You can have access to mass surveillance software.

Speaker 2 (31:13):
Right.

Speaker 3 (31:13):
You could download Life360 if you wanted enterprise.

Speaker 2 (31:16):
Surveillance, yes, it's doable, right. More than that, yeah.
And so to force it en masse on society is a completely different thing,
right. And I do agree with Taylor.
I don't actually think it's mostly motivated by concern for children.
I think, I think the government has other things in mind.
But you know, uh, then I talk to other parents.

Speaker 1 (31:34):
This is bipartisan. This, this is bipartisan, bipartisan.

Speaker 3 (31:38):
The Patriot Act, I just want to say, yeah, exactly, the Patriot Act.
Mass surveillance is always a bipartisan issue. Censorship is always
a bipartisan issue.

Speaker 1 (31:46):
Mass surveillance, always a bipartisan issue.

Speaker 2 (31:48):
It can be, yeah. The, you know, the
push to put explicit lyrics labels on music in
the eighties was bipartisan. Tipper Gore led it. Actually, so
that's often the case, and you know, you get to this,
it's kind of like being against motherhood as a concept.
You know, like, you won't win votes on that. If
you say you're protecting the children, you can get away
with anything, right? And I have a child. I mean, like,

(32:09):
I'm cognizant of what people are afraid of. I've been
on the internet a long time. There's some gnarly stuff
on there. I don't want my kid to see it. So, but
the thing is there's already plenty of tools available to
do this. I think the device manufacturers, I think the
hardware manufacturers in particular, could do a better job.

Speaker 3 (32:28):
What are you talking about? That would be ten times
worse than the app store. What are you talking about? Actually,
that is crazy. This is something that people say when
they don't understand. That is ten times worse, because it's
not about, like, oh, social media is harmful.
That's why you're getting people trying to
use their calculator app, or an immigrant I interviewed yesterday
who was trying to use, you know, the weather app.
There's a weather app that's giving

(32:49):
everybody problems. That's been written about, you know. This
is a terrifying time for immigrants, and immigrants are now
basically being driven offline. I've been talking to these immigrants
rights groups that basically don't know what
to do, because even people that run mesh network apps
are sounding the alarm saying, hey, if you guys do
this device level identity verification, we will no longer be

(33:10):
able to protect.

Speaker 2 (33:11):
I don't think they should do ID verification.

Speaker 3 (33:13):
I just think that that's what they say.

Speaker 2 (33:15):
You know, I'm just saying that device manufacturers should voluntarily
put better parental controls on the stuff that they sell.

Speaker 3 (33:20):
Oh, I think they have pretty robust parental controls.

Speaker 2 (33:23):
Actually, how.

Speaker 1 (33:26):
I want to, I want to come to you.

Speaker 3 (33:28):
You could buy surveillance software. There is endless, there is
literally enterprise level surveillance software. I don't think we should
surveil children, and you don't have to buy a child a phone.
You don't have to buy a child a phone.

Speaker 1 (33:37):
I want to hear from Natasha what you think about
these social media bans moving through Congress, or at least
restrictions on age verification and the interest which.

Speaker 3 (33:46):
Is identity verification. There's no way to identify someone's age
without verifying their identity, right.

Speaker 4 (33:50):
The things that I find
very troubling are, yeah, like how some of the reporting
that we've all done has been, you know, co-opted
into these very broad-strokes bills. Like, you know, I've been writing about
these chatbots, you know, some of the deaths by suicide,

(34:10):
and I would say that, you know, the reaction
there is also identity verification. And the companies
that are involved, I mean, Taylor's written more about this,
but the technology involved is inaccurate and is run by, you know, companies
that people should not trust with their data. And I

(34:33):
think the other thing that I'm always.

Speaker 1 (34:35):
Who's doing the age verification?

Speaker 3 (34:36):
Yeah, the biggest third party, the biggest third party identity
verification tool is Persona. That's the one that Roblox and
Discord use. That's a Peter Thiel, Founders Fund backed
company that is a mass surveillance company. You also
see Clear, you see all of these identity companies. They basically believe
in a digital ID, and they want to remove anonymity
from the Internet, and they want to use the data
for profit and for their own. I mean, sorry to interrupt,

(34:58):
but it's the same billionaires. You cannot claim to
be against big tech and curbing the power of billionaires
when you're about to give a multi billion dollar
gift to Peter Thiel and those same billionaires.

Speaker 4 (35:06):
Yeah, I think that, you know, obviously, like, legislators, everyone,
Americans are tilted towards, like, techno-solutionism. It really doesn't.
I can't believe that we're arguing for a solution that's unproven.
But I also think about, like, Roblox, for example. You know,
they only recently started banning conversations between users. You

(35:33):
know, they recently started banning the
ability for anyone to talk to anyone. Like, that is
an extremely simple

Speaker 3 (35:43):
Gait.

Speaker 4 (35:44):
But that's yeah, well.

Speaker 3 (35:45):
What I mean, I think like the thing, first of all,
Roadblocks is a game. It's not a social media app.
So I'm with you, Natasha in the sense that, like
if Roadblocks wants to, you know, call themselves the social
media platform or metaverse or whatever, then you know it's
a different story, and you're right that they should have
sort of different protections and controls.

Speaker 1 (36:00):
Yeah, I want, I want to. Unfortunately we don't
have arguably the most influential tech thinker of our time, Jonathan.

Speaker 3 (36:06):
Haidt? A Heritage Foundation collaborator who's been working with Morality in Media.

Speaker 1 (36:12):
I do want to play a clip, because I think
parents around the world are extremely swayed by Jonathan Haidt.

Speaker 3 (36:19):
I really think it's actually incredibly irresponsible to platform.

Speaker 5 (36:22):
The current situation is any nine year old who's old
enough to say that she's thirteen can sign a contract
with a company to give away her data, to
expose herself to a platform that's been designed to addict her.
And she does all of this at the age of
nine with no parental knowledge or consent.

Speaker 3 (36:41):
So okay, first of all, it's really frustrating to hear
you play a clip like that that is so deeply
misleading and wrong. Like, I guess what you're talking about
when you say sign a contract is downloading an app? Yes,
of course, you know, a child, if you gave them
access to the app store, could download an app. I'm
all for, you know again, putting parental controls on. But

(37:03):
what's the alternative? The alternative is to remove anonymity from
the Internet. Now, Jonathan Haidt himself has promoted really dangerous,
hateful anti-trans conspiracy theories. Jonathan Haidt is
part of this reactionary movement to remove LGBTQ people from
the Internet. Jonathan Haidt has written extensively about, you know,
how basically, effectively, the Internet makes, you know, women liberal

(37:23):
and liberal women are miserable. So this is not a
serious person. This is not a person that has any background.
Every single top researcher on this topic, Candice Odgers, Alice Marwick,
all of these people that have dedicated their entire lives
to studying this at UNC, Princeton, et cetera, have come
out and said that he is full of it. Candice
Odgers wrote a great debunking of him, you know, in
The Atlantic and in Nature magazine. None of his book

(37:45):
is based on data, and we cannot be listening to
these far right Christian nationalist wackos. I'm sorry, like, I'm
so frustrated that you're even playing someone like that. Why
aren't you playing Alex Jones? Why aren't you playing you
know like that's that's the type of stuff that you're spouting.

Speaker 1 (38:02):
Well, I mean, A, I'm playing the clip rather than
endorsing Jonathan.

Speaker 3 (38:06):
Okay. So, I mean, this is Alex Jones, and we don't
have to endorse this.

Speaker 1 (38:09):
But this is the number one nonfiction book about how
society and technology interact, best selling book.

Speaker 3 (38:15):
I mean, it's the best selling book. There's a
lot of money behind it. So you're right, it's a
very well funded campaign.

Speaker 1 (38:20):
Yeah, but this is he taps into a concern that
many many parents feel.

Speaker 3 (38:25):
I mean, sure, but he's, he's leveraging that concern.
You know, many parents feel concerned about their children going
through puberty, and you know, these anti trans hate groups
leverage that very real concern that parents have about their
child developing an identity, and they leverage that
to push hate laws. And what makes me so angry,
as somebody that actually wants to regulate technology, is that

(38:46):
these these laws, if they pass, will permanently cement the
power of big tech. It will give the government unprecedented
levels of surveillance. It will harm the most marginalized people,
including children, because, you know, there are immigrant
children, there are LGBTQ children, there are young girls that suffer.
And it's all because Jonathan Haidt wants to make millions
of dollars, you know, so he can push
anti-trans hate.

Speaker 1 (39:07):
Are there any, like, what are the, short of age verification,
like, what have you seen?

Speaker 3 (39:14):
Maybe that?

Speaker 1 (39:20):
What are the, what are the solutions here? Like, I
mean, what are the things people have proposed that, short
of a ban for children, can protect them? I mean, just
saying yes, parents can surveil their children if they want,
or, like, the software that exists? Like, well.

Speaker 3 (39:33):
You're worried about kids' mental health, we actually know the
people that saidy kids' mental health have come out and
there are policies, economic and social policies that would drastically
improve kids' mental health, specifically especially the kids living below
the poverty line.

Speaker 1 (39:47):
Natasha, Stephen, you know, I think.

Speaker 2 (39:49):
It's a broader reaction to the guilt that most adults
feel for the amount of time that they spend online,
on their phones, looking at, you know, adult
images on the Internet, and they know that it has
warped them. It's real, it's a real phenomenon. And they

(40:10):
then look at their children and be like, boy, I
don't want my kid to end up having nine and
a half hours of screen time each day like I do.
But I've never heard a proposal to limit adult time
on the internet, which would probably be healthier, honestly.

Speaker 3 (40:23):
Well no, well, Stephen, these would block people like immigrants
from even accessing the Internet at all. This would block
trans people and cancer.

Speaker 2 (40:31):
And look, I'm totally opposed to this. I'm just telling
you the psychology of where I think it's actually coming from.

Speaker 3 (40:35):
I agree with you.

Speaker 2 (40:37):
I think most people feel almost daily that they're trapped
in a cycle of like not wanting to be on
social media and also being on social media all the time,
and they know that that dynamic is, in fact, unhealthy.
You know, I'm this way. I live through
this kind of cycle of, frankly, it's an addictive cycle.
It's the way an addict operates. I deleted the social

(40:59):
media app.

Speaker 3 (40:59):
And then, can I tell you something? Did you see
the big study that came out last year? I'm just
telling you, no, no, no, no. But let me tell
you what the actual data says: thinking about social media,
treating it like an addiction,
actually makes it significantly harder to moderate your own use. So instead of viewing

(41:20):
it as a habit that you can control, viewing it
as an addiction actually makes it extremely hard to quit.
And this narrative of addiction is actually something
the tech industry themselves has pushed, because it's very,
it's very conducive to the tech industry. It allows them to
pass surveillance laws.

Speaker 1 (41:38):
Taylor, it's one minute to noon your time, so
I know you have to go. You've given us a
lot to chew on. I want to ask each of
you before you go, starting with you, Taylor, since I know you
have to run: who had the best week in tech
and who had the worst?

Speaker 3 (41:48):
Week in tech? Oh gosh. I think, who, I think
the users had the worst week in tech again, because
we see these mass surveillance laws pass. And I would
say Mark Zuckerberg, who's been lobbying for and funding, you know,
pushing identity verification laws, is having the best week.

Speaker 4 (42:02):
Natasha, I would say Anthropic is having the best week.
You know, despite the short term issues, their downloads are up.
They could not be in the news more. You know,
their frameworks of how to think about autonomous weapons and
surveillance have been adopted. I would say that, you know,

(42:23):
the targets of US military strikes are having the worst
week ever because you know, what's happening to them has
been obscured. It's been talked about just in terms of
you know, whether or not AI was involved. Meanwhile, we
know that, like, US strikes often cause civilian casualties.

(42:44):
So yeah, I would say the targets of this just
have gotten really passed over.

Speaker 2 (42:51):
Stephen, I agree Anthropic had the best week in tech.
I mean, yeah, if you got hit by a bomb, you
probably had the worst week in tech. But other people
who had a really bad week in tech are software engineers,
particularly people who write code for a living. They
are cooked. I mean, they are in trouble. And

(43:11):
a lot of people went to university to learn to
code and sneered at everyone else about how poor they're
going to be because they didn't learn to code. So
now those guys are actually looking like their jobs are
going to go obsolete. It's already starting. It's already starting.
You know, they're not going to get the kind of
money they used to get paid. They've been basically functionally

(43:33):
made obsolete over the course of two or three months,
and none of them were prepared for it.

Speaker 4 (43:38):
I would just say that there is a lot more
overlap between Stephen's point about the end of software engineers,
which is a narrative really pushed a lot by Anthropic,
and a lot of the other stuff that we've been
talking about. You know, if you look at the data,
you know, in some cases people are actually not as

(43:59):
productive as they thought, and, you know, the ability,
like, how much we pay software engineers and how we
value their work, you know, is directly affected by these narratives.
So I would just say, like, that terror that
software engineers and everyone have about joblessness is part
of the reason that we're seeing this fight between Anthropic

(44:21):
and the Pentagon.

Speaker 1 (44:27):
That's it for The Week in Tech. Thank you all so
much for participating in our inaugural roundtable, and we
hope you'll all be back very soon.

Speaker 4 (44:34):
Thank you, thanks so much for having me.

Speaker 3 (44:36):
We didn't even get to talk about the Computer Fraud
and Abuse Act, which is what I think should be
reformed next week.

Speaker 1 (44:44):
Next week on TechStuff. I'm Oz Woloshyn. This
episode was produced by Eliza Dennis and Melissa Slaughter. Executive

(45:06):
produced by me, Julian Nutter, and Kate Osborne for Kaleidoscope
and Katrina Norvell for iHeart Podcasts. The engineer is Charles
de Montebello from CDM Studios. Jack Insley mixed this episode,
and Kyle Murdoch wrote our theme song. A special thank
you to Nitasha Tiku, Stephen Witt and Taylor Lorenz. Please
check out all the work they put out into the world.

(45:27):
We're lucky to call them friends of the pod, and
please do rate, review and reach out to us at
tech Stuff Podcast at gmail dot com.
