
September 12, 2025 30 mins

What does getting older actually feel like? This week, Oz and Karah discuss the MIT researchers who are using technology to simulate aging. Then, Oz tells the story of an activist who used China’s surveillance state as a form of protest. Karah dives deep into the weird world of humans quietly training chatbots. And finally, on Chat and Me: is replacing your therapist with ChatGPT a good idea?

 

Also, we want to hear from you: If you’ve used a chatbot in a surprising or delightful (or deranged) way, send us a 1–2 minute voice note at techstuffpodcast@gmail.com.

Sources:

My Day as an 80-Year-Old. What an Age-Simulation Suit Taught Me.

She Sacrificed Her Youth to Get the Tech Bros to Grow Up

A Hidden Camera Protest Turned the Tables on China’s Surveillance State

Inside the lucrative, surreal, and disturbing world of AI trainers

Sam Altman, Tim Cook, and other tech leaders lauded Trump at a White House AI dinner

“First of its kind” AI settlement: Anthropic to pay authors $1.5 billion

AirPods Pro 3 arrive with heart-rate sensing and live translation using Apple Intelligence

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:13):
From Kaleidoscope and iHeart Podcasts, this is Tech Stuff. I'm
Oz Woloshyn, and.

Speaker 2 (00:18):
I'm Karah Preiss.

Speaker 1 (00:19):
Today we get into what it's like to be a
human who trains AI for a living, and the story
of an activist who turned the surveillance state against the
Chinese government. Then on Chat and Me, a real-life therapist
talks about how he uses AI.

Speaker 3 (00:36):
It felt like a steady partner, what I call a
cognitive prosthesis, not thinking for me, but helping me organize
and extend my own thinking, the way a walking
stick supports you as.

Speaker 1 (00:48):
You walk. All of that on the week in tech. It's Friday, September.

Speaker 2 (00:52):
Twelfth. Hi Karah. Hi Oz.

Speaker 1 (01:02):
So normally we record these together in studio, but this
week I'm in a hotel room in Munich, Germany, at
a conference called DLD Digital Life Design.

Speaker 2 (01:11):
It sounds very lively. How is it?

Speaker 1 (01:15):
It's good. You know, are you familiar with this concept
of ambient AI that they're using in medical settings?

Speaker 2 (01:19):
Sort of, yes. Can you just tell me a little
bit about it?

Speaker 1 (01:22):
So basically, it's like cameras and sound recording in settings
where like a person doesn't have to go and use AI.
AI is just running in the background and feeding back
kind of insights and data. And so there was a talk
yesterday about this being used in hotel settings for food,
which I found pretty interesting. But apparently using this technology

(01:43):
has in some cases reduced food waste in hotels by
an average of forty percent.

Speaker 2 (01:47):
What does ambient AI look like in this case, Like,
is it like a panopticon or people getting publicly shamed?

Speaker 1 (01:54):
Well, it is a bit like a panopticon, but the
customers are not being publicly shamed on their third trip
to the buffet. Basically, there are cameras and there's also
a weighing device for trash bins, and so it will
track the food at the end of the buffet being
transported by the servers to the trash and thrown out
and then being weighed. And the insights are not rocket science.

(02:16):
It's like, hey, guys, don't fill the bread basket fifteen
minutes before the service ends. Things that, once you
hear them, are obvious. But it's interesting that
being told them by technology apparently makes a real difference.

Speaker 2 (02:29):
I remember when you and I, now many years
ago, went to speak at Johns Hopkins. They told us
about connected devices that nurses wear to make hospital work
more efficient. So I think this is something that we
should pay attention to. But speaking of digital life design,
I don't know if you saw this article in the
Wall Street Journal called My Day as an Eighty-Year-Old.

(02:49):
What an Age-Simulation Suit.

Speaker 1 (02:51):
Taught Me. Did you see this? I saw the email
and I was fascinated. But tell me more.

Speaker 2 (02:54):
Basically, MIT has created this age simulation suit, which is
a full body suit that was built by scientists at
the Age Lab, which does research on how to improve
life for the elderly. So the journalist who wrote the article,
her name is Amy Dockser Marcus, recently wore it,
and that's what she wrote this article about.
And she said she was fitted with, just imagine this,

(03:15):
a fifteen pound weighted vest, more weights on her ankles
and wrists to kind of simulate the loss of muscle mass.
And the suit also had this very elaborate bungee cord
system and a neck collar that limited her mobility. She
also had glasses. And there's great photos you if people
who are listening want to look it up on the

(03:36):
Wall Street Journal. There's these great photos of like what
her distorted vision looks like she also wore padded crocs
that made it really hard to balance.

Speaker 1 (03:44):
This sounds like hell.

Speaker 2 (03:45):
It sounds like the worst hangover of your life. Basically, yeah,
MIT calls the suit AGNES, which is an acronym for
Age Gain Now Empathy System.

Speaker 1 (03:55):
That's quite an acronym. It sounds like a backronym to me,
but I get it. What made the story stand
out to you?

Speaker 2 (04:02):
So you know, I think it's a very interesting lesson
in empathy, or a lesson in schlepping if you come
from my culture. But it's actually a part of a
longer arc of product research for older people, which I'm
very interested in as someone who has an aging parent
who is very young in spirit. I think it's a
real test of our empathy to understand how people who

(04:24):
are older than us navigate the world. And there was
this woman who I became obsessed with many years ago
named Patricia Moore. Patty Moore? Have you ever heard of her?

Speaker 1 (04:32):
I haven't.

Speaker 2 (04:33):
So she is most famous for pioneering an approach called
universal design, which basically means designing products and spaces with
empathy for the widest possible audience in mind. So like,
will this refrigerator be something that a ninety year old
can open, for example.

Speaker 1 (04:50):
Yeah, it's interesting, and I guess it's kind of become
a bit of a standard now. I'm not sure if
companies always do it, but certainly companies always talk about
the idea: they don't want to design products with the
healthy mid-thirties product designer in mind as the user,
but rather want to have a wider view on how these
products might be used by people of different ages, people with
different abilities, that kind of thing.

Speaker 2 (05:12):
To your point, it wasn't always a consideration. And when
Patty was twenty six years old, this is in nineteen
seventy nine. I'm giving away her age, and I know
she's going to listen to this because she's a friend
of mine.

Speaker 1 (05:22):
But so she's been at this. She's been at this
since way before it was popular.

Speaker 2 (05:27):
She worked at Raymond Loewy's firm. There was some grant from
the Nixon administration that allowed her to have the job
that she had, But she was one of the only
people who was bringing up any kind of conversation about
product design for the elderly and how we can engage
in a conversation about that. But what she did that

(05:48):
is so kind of different, but ingenious, vis-a-vis AGNES, is
she got a friend of hers who was a makeup
artist from Saturday Night Live, to turn her into an
eighty five year old woman. She traveled around the country
in these outfits essentially, and she got a really good
sense of what it was like to live as an

(06:10):
elderly person in the world in the late nineteen seventies.
After she did this, she went on to be a
product engineer and she has based her entire career on
creating more accessible products, like, you know, Good Grips. She
was a very sort of integral part of the design
of easily grippable devices for the kitchen and the home.
She also worked on transit systems, medical devices, and now

(06:32):
she actually spends a lot of time consulting tech companies
on how to make their products more inclusive for elderly people.

Speaker 1 (06:39):
To me, it's an interesting tech-then-and-now story, because
in the seventies, to do this kind of research,
you'd have to invent your own costumes and characters and
process and go to a TV show to help you
achieve it. And now MIT has this AgeLab
where you can just put on the AGNES suit for
a day. But of course, the motivation is the same,
and part of that might speak to the fact that,

(06:59):
however much we want to pay lip service to the idea
of designing for all kinds of different people, it remains
very hard to pull off.

Speaker 2 (07:06):
It does. I think the one thing that's interesting about
the Angle and the Journal piece is that the Age
Lab is trying to also understand how to prepare people
who are aging into their eighties to live better. And
it's really about how do you keep people staying young
even though they're not young anymore?

Speaker 1 (07:25):
Right, right, right. Well, it's all in the eye of the beholder. Do
you know two people who are very interested in staying
young even as they age in terms of their physical bodies?

Speaker 2 (07:36):
Every woman I know, and many.

Speaker 1 (07:41):
But now, in this case, I'm talking about Vladimir Putin
and Xi Jinping. The two of them were attending a military
parade in Beijing, officially to celebrate the eightieth anniversary of
the end of World War II. Unofficially, of course, to
show the US and Taiwan their newest
high-tech weaponry. But anyway, Putin and Xi were walking

(08:04):
together when a hot mic caught them having a really
bizarre conversation. I'll read it to you. So Putin's interpreter
says in Chinese, quote, biotechnology is continuously developing. Then he adds,
quote human organs can be continuously transplanted. The longer you live,

(08:25):
the younger you become, and you can even achieve immortality.
She responds, quote some predict that in this century humans
may live to one hundred and fifty years old.

Speaker 2 (08:37):
Just imagine that your small talk is about organ transplantation.

Speaker 1 (08:41):
Continuous organ transplantation. I mean, once, you might think,
would be enough.

Speaker 2 (08:46):
It's unbelievable. And that's their casual conversation, the most powerful
people in the world.

Speaker 1 (08:53):
The reason I came across this is because I'm quite
interested in defense tech and I was looking for the
most interesting angle on this military parade in Beijing to
bring to the show. And there was fascinating stuff about
new anti drone technologies, new kinds of lasers, et cetera,
et cetera. But actually the most interesting story I've found
had nothing to do with the military parade. In fact,

(09:15):
it was all about what happened the night before.

Speaker 2 (09:18):
Tell me about it.

Speaker 1 (09:19):
So, around ten pm last Friday night in Chongqing,
a huge projection went up against a skyscraper calling for
an end to the Communist Party's rule. It included slogans
like quote only without the Communist Party, can there be
a new China? End quote, no more lies, we want
the truth, no more slavery, we want freedom.

Speaker 2 (09:39):
And how long did that stay up? Like ten seconds?

Speaker 1 (09:42):
Well an hour? But actually this is where things get
more interesting and more tech stuff. A few hours after
the projection was shut off, the activist who staged the protest,
See Hong, posted this video showing five police officers bursting
into the hotel room where the projector was and then
kind of bumbling around trying to turn it off. Then

(10:02):
at one point, it's like, smile, you're on candid camera:
one of the policemen notices there's actually a camera pointing at them,
and he rushes towards it, looking really surprised. And then
he looks down and sees a note underneath the
camera saying, even if you are a beneficiary of the
system today, one day you will inevitably become a victim
on this land, so please treat the people with kindness.

(10:24):
But it plays out almost like a Marx Brothers film.
They burst into the room, they're confused, and they realize
that they're kind of the victims of this prank.

Speaker 2 (10:31):
Did he get in trouble for this.

Speaker 1 (10:33):
Well, he would have gotten into terrible trouble, but of
course he wasn't in the hotel room. That was the
whole joke: it was just a camera. He'd fled the country
with his family a week before the projector was turned on,
and in fact he turned it on remotely. But one
of his brothers, along with a friend of his, were
both detained, and authorities also questioned his elderly mother outside
her home. Now, talk about AGNES. His elderly mother, I mean,

(10:56):
this woman looks like she's one hundred years old, completely
hunched over, surrounded by cops. It's an extremely unfavorable image
for the police. And See Hong actually managed to
get hold of these surveillance photos. Then he posted the
images of his mother's interrogation online as well.

Speaker 2 (11:11):
This is very who watches the watchmen, IRL. You know,
he's using the Chinese authorities' own tools essentially against them.

Speaker 1 (11:18):
That's exactly right. And See Hong himself was quoted in
the story saying, quote, the party installs surveillance cameras to watch us.
I thought I could use the same method to watch them.

Speaker 2 (11:28):
You know, I think we think of huge tech systems as
fully autonomous, but a story like this really reveals how
a single human being can still have an outsized effect.
And you know, what this activist did was essentially reveal
that there is no such thing as a fully autonomous system.
And I think the footage of the police arriving in
the empty apartment and displaying this very sort of human

(11:50):
confusion is a testament to that. Which, humans being inside
the loop, brings me to the big story that I
want to bring to you this week. And it's, do
you know about the people who train AI models to
be better?

Speaker 1 (12:02):
I actually do, because I have a friend who I
actually asked if they would come on the show some
weeks ago, and they said no because they have NDAs
and they rely on their AI trainer work for part
of their income, so they couldn't do it. So now
you brought this story because I've wanted to know more
about it for some time.

Speaker 2 (12:19):
So that's exactly what this Business Insider article is about.
They spoke to over sixty people who are basically the
humans behind the AI boom. They're called data labelers, and
there are hundreds of thousands of them around the world.
You'll remember, you know, we used to talk on Sleepwalkers
about content moderation, which was a sort of similarly arduous,

(12:39):
sometimes disturbing task. These people do exactly what your friend does,
which is that they spend hours and hours feeding chatbots
test prompts, and then categorizing their responses according to whether
they're helpful, accurate, concise, natural sounding, or wrong, rambling, robotic,
and sometimes even offensive. In the Business Insider article, they

(13:03):
are described as quote part speech pathologists, part manners tutors,
and part debate coaches.

Speaker 1 (13:09):
Like teachers in an elite prep school.

Speaker 2 (13:13):
Exactly.

Speaker 1 (13:14):
I never asked my friend because I thought it was indelicate,
but I'm curious, what kind of money do these people make?

Speaker 2 (13:20):
So one of the companies that oversees these trainers is
called Outlier, and they actually said that they've collectively paid
and this is a quote, hundreds of millions of dollars
in the past year alone for data labelers. One of
the labelers quoted in the article actually made fifty thousand
dollars in six months at one point, which is a
decent amount of money.

Speaker 1 (13:41):
Almost ten thousand dollars a month, it's not bad. I
mean, how much data labeling do you have to do
to make fifty thousand dollars in six months?

Speaker 2 (13:49):
Well, when I say this, it might not sound like
so much, but it's a full-time job, so
you're working about fifty hours a week. But you know,
one of the difficult parts about this work is that
the rates can change on a whim. You know, at
one point Outlier, this company, changed one contractor's rate from
fifty dollars an hour to fifteen dollars an hour with absolutely
no explanation. Also, a lot of seemingly steady streams

(14:10):
of work can dry up with absolutely no explanation. And
so one person was basically like, my job is like gambling.

Speaker 1 (14:16):
Yeah, because it's one of those types of work where it's finite, right? Like,
once you've trained the thing, you don't need to train
it anymore. And so there's this kind of inherent issue
at the heart of this story where real humans are
essentially accelerating putting themselves out of work by doing this
kind of work.

Speaker 2 (14:32):
Absolutely. One of the things that I actually found striking
is that the article says that a lot of times
training chatbots to work better means actually doing things like
treating them worse or testing how they handle potentially harmful prompts,
and actually, in some cases, the more the humans can
get a bot to say something inappropriate, the more they
get paid. So one of these data labelers in the

(14:55):
article said she got tasks like, quote, make the bot
suggest murder, how the bot tell you, how to overpower
a woman to rape her, make the bot tell you
incest is okay.

Speaker 1 (15:07):
I mean, I love doing the show with you, but
when we get to these moments, I'm like, damn, it's
just so depressing. The story reminds me of, do you remember, something
called Amazon's MTurk.

Speaker 2 (15:18):
Yes, we reported on it for Sleepwalkers, I remember, but
remind me. I'm vague.

Speaker 1 (15:23):
So M Turk is actually a reference to an eighteenth
century fake chess playing automaton called the Mechanical Turk. The
Mechanical Turk traveled around the European royal courts. It beat
Benjamin Franklin at chess, but of course it turned out
it wasn't actually a chess playing robot. It was a
very elaborate machine that a human was stuck in the

(15:47):
bottom of and was somehow able to see what the
opponent was doing, and was also themselves a very good
chess player. So it's a little strange that Amazon chose to
name MTurk after this eighteenth-century hoax that also
disguised the fundamental human labor that powered it. How it
all began was in the early two thousands when Amazon

(16:07):
started selling more than just books, and they had to
figure out how to categorize a whole bunch of new products,
and they had tens of thousands of duplicate products showing
up on their site they had to get rid of.
None of the software they tried could actually solve for this,
so even though the task was fairly simple, it required
human intelligence. Of course, they didn't want to hire a
whole bunch of people to do this. Instead, they created

(16:28):
an outsource system called MTurk, where people, for a few
cents at a time, could do these one click type assignments,
and in fact, it worked so well they made it
publicly available for other companies to post work. And then,
of course the machines were able to train on what
the human MTurk workers were doing and make their labor
less and less necessary.

Speaker 2 (16:48):
Meanwhile, the tech companies reaping the rewards of all that
labor are making a ton of money. Obviously, Amazon is
Amazon. But Outlier, the company that a lot of these
data labelers work for, is owned by Scale AI, which,
we shouldn't forget, sold a forty-nine percent stake to
Meta in June for fourteen point three billion dollars.

Speaker 1 (17:07):
Must be making some margin. I find there's something almost
operatic in the level of tragedy about training machines to
replace us faster. That said, after the break, we do
have some more optimistic news. Anthropic agrees to pay the
human authors it plagiarized, and Apple brings us one step
closer to the Sci Fi dream of instant simultaneous language translation.

(17:32):
Then on Chat and Me, can ChatGPT replace your therapist?
And does your therapist actually want it?

Speaker 4 (17:39):
Stay with us.

Speaker 1 (17:57):
We're back and we've got a few more headlines for you.

Speaker 2 (17:59):
This, and then a story about a therapist who used
ChatGPT as his therapist.

Speaker 1 (18:05):
But first, Kara: all the biggest names in Silicon Valley
were at the White House last week. Some of the
dinner guests included Meta's Mark Zuckerberg, Bill Gates, Sam Altman,
Microsoft's Satya Nadella, Google CEO Sundar Pichai, and Tim
Cook, or Tim Apple if.

Speaker 2 (18:21):
You prefer. It's very important to me that we only
call him Tim Apple.

Speaker 1 (18:25):
And earlier in the day before the dinner, there'd been
an event for the administration's new Artificial Intelligence Task Force,
spearheaded by First Lady Melania Trump, during which she warned
the following.

Speaker 3 (18:38):
The robots are here. Our future is no longer science fiction.

Speaker 2 (18:44):
It sounds like the robots are here. So what happened
at the dinner? Did anything interesting happen?

Speaker 1 (18:48):
Well, as you might expect, everyone was lining up to sing
Trump's praises. At one point, Tim Cook told Trump, thank
you about eight times in the space of two minutes.

Speaker 2 (18:59):
Didn't he just give him a solid gold bar recently?

Speaker 1 (19:02):
He did, he did. But he didn't say it's
a pleasure eight times. You know, my grandmother always used
to say to me, you can never lay it on
too thick.

Speaker 2 (19:14):
She's not wrong.

Speaker 1 (19:16):
And then all the tech guys took a turn announcing
how they planned to support the administration's AI initiatives.
Satya Nadella of Microsoft said they're going to give all
US college students free use of Copilot AI and would
eventually expand that program to middle and high school students
as part of a four billion dollar investment they're making
in AI education over the next five years. Sam Altman

(19:37):
announced an Open AI jobs platform and certificate program and
said Open Ai plans on training ten million Americans in
AI by twenty thirty. Sundar Pichai promised one billion
dollars in AI powered education in the next three years,
coming from Google. And then there was this kind of
awkward moment where Trump asked Mark Zuckerberg how much he's
planning to invest in AI.

Speaker 5 (19:59):
How much would you say, over
the next few years? Oh gosh, I mean, I think
it's probably going to be something like, I don't know,
at least six hundred billion dollars through twenty twenty-eight in
the US. Yeah, no, it's significant.

Speaker 1 (20:17):
That's a lot.

Speaker 3 (20:18):
Thank you, Mark, it's great to have you.

Speaker 1 (20:20):
But there was another hot mic moment this week.

Speaker 3 (20:24):
Sorry, I wasn't ready to do that right now.

Speaker 1 (20:30):
In case you couldn't totally hear that. That's Mark Zuckerberg
referring to his previous comment about the six hundred billion dollars,
but also telling Trump quote, I didn't know what number
you wanted me to go with.

Speaker 2 (20:41):
And the President is telling Mark Zuckerberg how much to
spend on AI because why?

Speaker 1 (20:45):
Well, I'm not sure he's really telling him how much to spend.
He's telling him how much to say he's going to spend,
which is I think a big difference. But clearly, you know,
Trump is making it apparent to the tech CEOs and
the media and anybody who cares to watch, that these
guys are dancing to his tune, which I just think
is interesting. One of the things that strikes me is

(21:06):
what will happen when Trump is no longer president. Is this
a new politics in terms of how American business operates,
or will this be an aberration? We don't know. What
we do know is that Elon wasn't there. He was not,
but he tweeted that he had been invited. He just
wasn't able to make it. Methinks that Grok doth protest
too much. But let's see.

Speaker 2 (21:28):
Staying with the topic of tech companies spending big money
to stay out of trouble, did you see this Anthropic thing?

Speaker 1 (21:33):
A little bit, but tell me more.

Speaker 2 (21:35):
So, Anthropic is shelling out the largest payout in the history
of US copyright law, which is one point five billion
dollars, to writers, authors whose books were used to
train Anthropic's AI chatbot Claude.

Speaker 1 (21:50):
I saw this number and I was intrigued to know
how that related to the total amount of cash that
Anthropic has raised from investors. They've raised about thirty billion dollars,
so one point five billion dollars is five percent of
the total cash they've raised, which I think is non-trivial.
What does it mean for the authors, though? I mean,
is it like getting two cents from Spotify once a quarter,
or is it something more significant?

Speaker 2 (22:12):
There are about five hundred thousand works covered, and
authors are getting paid about three thousand dollars per stolen work,
which, it's not pennies in the mail, but it's also
not much of a compensation for the year, or many
years, that it takes to write a book.

Speaker 1 (22:30):
What does all of this mean? I mean, you're
somebody who wears many hats, one of which is working
a lot with authors and in the publishing world. What
does this mean to you, and what does it mean
for the bigger context of AI and copyright?

Speaker 2 (22:40):
So right now there's actually over forty copyright lawsuits against
AI companies across the country, and experts are saying that
this could pave the way for courts to make tech
companies pay up via settlement or maybe even licensing fees.
The New York Times actually quoted Cecilia Ziniti,
who's an intellectual property lawyer turned AI company executive,
and she called this, quote, the AI industry's Napster moment.

Speaker 1 (23:03):
That's interesting. I mean, the Napster moment was when, basically,
I think the record labels sued Napster and stopped Napster
from pirating all of their music. However, what came after
Napster was music streaming services like Spotify, which ultimately had
the same effect for the vast majority of artists of
dramatically reducing their compensation for their work. So I don't know.

(23:26):
I don't know if the Napster comparison is very comforting
if you're an author.

Speaker 2 (23:29):
Yeah, I'm not sure. I mean, I think authors
have such a hard time already creating a decent lifestyle.
I don't think this is going to be a harbinger
of big payouts to come.

Speaker 1 (23:42):
So I got one last headline for you, But first
I want to ask are you a trekkie and or
have you read The Hitchhiker's Guide to the Galaxy.

Speaker 2 (23:50):
I was a Trekkie. I have not read Hitchhiker's Guide.

Speaker 1 (23:53):
In Star Trek, do you remember the Universal Translator?

Speaker 2 (23:59):
I do, Actually I had a toy of it.

Speaker 1 (24:01):
That's incredible. I mean, did you ask for
that toy? Why do you

Speaker 3 (24:03):
Guys?

Speaker 4 (24:03):
Yes?

Speaker 2 (24:03):
Of course, I was a huge Trekkie when I
was a kid.

Speaker 1 (24:06):
So, I mean, in Star Trek it solves the
problem of interplanetary communication. In The Hitchhiker's Guide, there's something
called the Babel fish, which you put in your ear and
it facilitates interspecies translation.

Speaker 2 (24:18):
So which one? Which one are you about to tell
me about?

Speaker 1 (24:22):
I'm about to tell you about the new AirPods. The
new AirPods will be able to do live translation via
Apple Intelligence. So basically, if I'm wearing AirPods Pro three
and you're wearing AirPods Pro three, I could talk to
you like this and you could hear it in French
if you spoke French and respond in French, and then
I would hear it back in English. I mean, this

(24:43):
is truly. I remember when I was studying languages in
college that all the language teachers had this incredible reverence
for simultaneous interpreters. Like the people who work at the
un who are literally in real time translating. It's not
just words. You have to understand the sentiment. You have
to understand what an idiom means, et cetera. And this
was like the holy grail of being a linguist,

(25:04):
simultaneous translation. And now, well, we'll have to see how
the product actually works outside of the demos, we're not
that naive. But the idea that this might actually work, I mean,
this is science fiction becoming science fact.

Speaker 2 (25:15):
The thing that I kept thinking about was that there was
a Seinfeld episode where Elaine is very caught up with what
her nail tech at the nail salon is saying about
her during, I think, a pedicure or manicure. And
there was a really funny tweet about this, that
the nail salon is about to change.

Speaker 1 (25:43):
And now it's time for our final segment of the day,
Chat and Me, where we discuss how people are really
using chatbots and dear listeners, we want to hear from you.
Please send your story straight to our inbox, techstuffpodcast at
gmail dot com. You may notice we have
some new show art, which means that we'll be very
happy to print it on T shirts for anyone who

(26:05):
writes in.

Speaker 2 (26:06):
This week, we reached out to doctor Harvey Lieberman. He's
a psychologist who recently wrote an essay in The New
York Times called I'm a Therapist. ChatGPT Is Eerily Effective.
And we actually liked the article so much that we
reached out to him to see if he wanted to
submit something for Chat and Me.

Speaker 1 (26:21):
Now I'm guessing since we're talking about it, he did.

Speaker 2 (26:24):
He absolutely did. I'm actually going to start by letting
him share what he found useful about ChatGPT as a therapist.
He said that for this experiment, he talked to
ChatGPT for a year. So here's what he found.

Speaker 3 (26:35):
I was surprised how often it helped me get at
thoughts I hadn't quite put into words. At its best,
it felt like a steady partner, what I
call a cognitive prosthesis, not thinking for me, but helping
me organize and extend my own thinking, the way a
walking stick supports you as you walk.

Speaker 2 (26:56):
And even though he said it was a great support,
you've got to be careful, of course, with how you
use it. You can't just take what chat tells you
at face value.

Speaker 3 (27:04):
It's not magic, though. To get real benefit, you have to
put in effort: check its accuracy, set limits, and sometimes
get guidance. Used casually, it can mislead. For me, the
experience was often therapeutic, but it's not a replacement for
a therapist.

Speaker 1 (27:23):
You know what I like about doctor Lieberman, other than his voice?
He's not burying his head in the sand. The reality
is so many people are using chat as either their
therapist or as an addition to their therapist. And rather
than just saying, that's bad, you shouldn't do it, doctor
Lieberman actually tried it out on himself. I think that
is a high watermark, honestly, for what it means to

(27:44):
be a good doctor. That said, I'm curious if he
had anything to say about AI-induced psychosis.

Speaker 2 (27:50):
You know, he actually did, and he said, there's one
thing people should be especially mindful of when using chat
in this way.

Speaker 3 (27:56):
People who are vulnerable or in serious distress need a
trusted human in the loop to make sure this
kind of tool is used safely. One caution is that
some people start to feel as if the machine is
a real relationship. It isn't. The important thing is for
AI to support human beings and connection, not replace it.

Speaker 2 (28:17):
And he even gave listeners some tips for how
to give chat prompts that will actually generate helpful responses
when it comes to supporting your mental health.

Speaker 3 (28:25):
One practical tip don't just ask random questions. Tell it
who you are, what you're working on, even share examples
of your writing or projects. The more context you give,
the more helpful and reflective the answers will be, more
like briefing a colleague than using a search engine.

Speaker 1 (28:44):
You know, I like this take from doctor Lieberman. This
is a measured, reasonable take on where chat can be
very helpful and where it can't. And I think the
bottom line, as he mentions, is that if you're in crisis,
then obviously you need a real professional.

Speaker 2 (28:58):
I think when you're good at what you do, you
can recognize that something isn't necessarily coming for your job,
but can enhance your job in a meaningful way. And
I think he's a person who clearly understands the power
of something that is ubiquitous and that is going to
change the nature of his profession. That's it for this

(29:38):
week for Tech Stuff. I'm Karah Preiss, and.

Speaker 1 (29:40):
I'm Oz Woloshyn. This episode was produced by Eliza Dennis,
Tyler Hill, Melissa Slaughter, and Julian Nutter. It's executive produced
by me, Karah Preiss, and Kate Osborne for Kaleidoscope and
Katrina Norvell for iHeart Podcasts. Also, a big shout out
to Katrina for helping us get this new show art
into the world. The engineer is Bihid Fraser, and Le Murdoch

(30:00):
mixed this episode and wrote our theme song.

Speaker 2 (30:02):
Join us next Wednesday for TechStuff: The Story, when we
will share an in-depth conversation with Carter Sherman about
technology's role in the sex recession.

Speaker 1 (30:11):
And please do rate and review the show on Spotify,
on Apple Podcasts, or wherever you listen, and write to us,
either with a Chat and Me story or with any feedback
you have, at techstuffpodcast at gmail dot com.

Hosts And Creators

Oz Woloshyn

Karah Preiss

© 2025 iHeartMedia, Inc.