
May 16, 2025 34 mins

What kind of technology do air traffic controllers use? This week in the News Roundup, Oz and Karah discuss how AI determines your real age, why chatbots can lead to delusions, and what to know about a familiar-sounding blood-testing startup. On Tech Support, features writer at New York Magazine’s Intelligencer James D. Walsh explains how AI-fueled cheating has overtaken college campuses, what students are saying, and how educators are trying to address it.



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:12):
Welcome to Tech Stuff, a production of iHeart Podcasts and Kaleidoscope.
I'm Oz Woloshyn, and today Karah Price and I will
bring you the headlines this week, including the selfies that
tell you how old you really are. Then, on tech Support,
we'll talk to New York Magazine's James D. Walsh about
using AI to cheat your way through college.

Speaker 2 (00:33):
And it quickly kind of dawned on me that everyone
is cheating. They may not be using the word cheating,
but they are cheating according to their honor code.

Speaker 1 (00:42):
All of that on the Week in Tech. It's Friday, May sixteenth.
Hello, Karah. Hi, Oz. So, Newark adjacently: when was
the last time you flew into Newark Airport?

Speaker 3 (00:53):
You know that Newark is normally my secret weapon airport,
but I haven't flown into Newark for quite some time.

Speaker 1 (00:59):
Especially internationally, actually. It is much faster getting through
customs and border patrol. However, it is
not a good time to be flying in and out
of New York right now. In the last few weeks,
there have been three telecommunication failures at the air traffic
control center that oversees the airport. The first outage late
last month lasted over a minute.

Speaker 3 (01:17):
Like, this is the stuff that makes me not want
to ever get on an airplane.

Speaker 1 (01:20):
Yeah, I mean, truly, think about it. This is
when air traffic control has no contact at all with
the planes in the sky, like none, none, and they're
just hoping for that minute, fingers crossed, they don't crash
into each other. And of course there are these compounding
delays afterwards, because the poor air traffic controllers get so
stressed by this, they have, like, PTSD.

Speaker 3 (01:38):
Yeah, I've actually heard that air traffic control is one
of the more stressful jobs that you can have in
this country.

Speaker 1 (01:44):
Especially when the systems are going dark and you
have dozens of planes in midair that you can't communicate with.

Speaker 4 (01:48):
And nobody else wants the job, so you're overworked.

Speaker 1 (01:51):
Absolutely.

Speaker 4 (01:52):
But why is this a tech story?

Speaker 1 (01:53):
Well, good question. The outages are being blamed in part
on systems that rely on old technologies.

Speaker 4 (02:00):
Like what are we talking about?

Speaker 1 (02:01):
Old, like, floppy disk old? Yeah. Basically, well,
here's how a former air traffic controller described it: If
you look at the technology we're using, most of it
is from the late nineteen eighties to the nineteen nineties.
We still use floppy disks to update our information system.
We still have paper strips. I mean, this is not
the eighties or nineties. This is ancient Egypt. We

(02:24):
use paper strips that we walk around the tower cab
with, that each controller writes something on and then hands
it to the next controller.

Speaker 3 (02:31):
I just, I was thinking about, like, if Gen Alpha
is like, you know what, I'm only going to hand
in my homework on floppy disk now, because that's what
we used to do when I was growing up. Like,
you would upload your homework to a floppy disk and then
bring it to class and print it at school.

Speaker 1 (02:43):
Well, I heard that in Japan there's a new trend
of fake digital cassette players. So it looks like a
cassette player, but it's actually an MP3 player. But anyway,
you know, I'm a big local news guy.

Speaker 4 (02:54):
You are you are.

Speaker 1 (02:55):
NJ.com reports that the technology at Newark is
so outdated that when parts need to be replaced, the
FAA has to source them from eBay.

Speaker 4 (03:06):
You know, this is how Kim Kardashian used to
buy her BlackBerrys.

Speaker 1 (03:08):
She did, after they were discontinued.

Speaker 3 (03:10):
Would go on eBay and buy like fifteen or twenty
of them smartly. But she's also not working for the FAA.
But in all seriousness, you know, in the FAA is
a federal agency. So what is the Trump administration doing
about this?

Speaker 1 (03:23):
Well, there is a three year plan to build big,
beautiful new traffic control systems with high speed network connections
and fiber and wireless connections, you know, but the US
obviously has real struggles with modernizing its infrastructure. I tried
to find out, in preparing for this episode whether or
not they use floppy disks in Chinese airports. I couldn't
get an answer, but my guess would be no.

Speaker 4 (03:46):
Is that because you don't know the Chinese words for floppy disk?

Speaker 1 (03:48):
That's probably what it is. And the
reason I was attracted to this story is because we rarely
think about technology in terms of, like, stuff from the
eighties that's still taped together and keeping us relatively safe
at thirty-eight thousand feet.

Speaker 3 (04:01):
Yes, but as is evidenced by my continuous use and
support of Apple's wired headphones, technology does not necessarily mean
the future or even the present. Technology is very much
the past.

Speaker 1 (04:13):
The other thing that attracted me to this story was
that I got to experience some rare British pride for
the first time since the Spice Girls. I feel like
that's not true. So The Financial Times reported this week
that the British Airways CEO has hailed the quote game
changing effects of AI for cutting delays at the airline.
They've invested one hundred million pounds in quote operational resilience,

(04:36):
which includes using AI to suggest how to minimize passenger
disruptions, like when to delay flights, when to cancel them,
when to preemptively rebook passengers, and even which gates aircraft
should land at to help passengers make tight connections. In
the process, the national flag carrier has gone from one
of the most delayed airlines in the world to one

(04:57):
of the least.

Speaker 3 (04:57):
Oh, good for them. Actually, I'd love to see some
national pride for you, because I also love to see
some support for those poor, poor schmucks at the back
of the airplane who are like, I have a connection
in seven seconds at Heathrow.

Speaker 1 (05:12):
But there is something quite sort of reassuring about human
nature, that most people do feel some empathy
for their fellow passengers. People do come together.

Speaker 4 (05:19):
People go, go right ahead, go right ahead. Unless you're pregnant,
in which case, we're gonna block you.

Speaker 1 (05:23):
So to change landing gears slightly. Tech is obviously a
system that can govern outcomes of tens of thousands of people,
and that's where you see it showing up, for example,
in air traffic control or airline management. But the other
kind of side of tech that I find particularly fascinating
is how it's becoming a way to make us legible,
like our tech is making us readable to others, which

(05:46):
is kind of fascinating and creepy. Scientists at Mass General
Brigham in Boston have developed a new AI prediction tool
that can identify a person's biological age just by analyzing
a picture of their face.

Speaker 3 (06:00):
To be clear, biological age and real age: like, I'm thirty-five,
but I might have a different biological age.

Speaker 1 (06:07):
It's basically how old you are at a cellular level,
based on the condition of your DNA, and it's different
from what scientists call chronological age.

Speaker 3 (06:16):
So someone could be forty years old that's their chronological age,
and have a biological age of thirty five, which I'm
assuming is you know, a sign of good health.

Speaker 4 (06:25):
Yeah, exactly, to have a lower biological age.

Speaker 1 (06:28):
That's right. And so there's an app for this. Guess
what it's called. Dead or Not? Hot or Not? No,
it's called FaceAge. And what's kind
of interesting is the way they trained it. So researchers
gave FaceAge thousands of publicly available pictures of people
over the age of sixty who were presumed to be healthy.

(06:49):
Then they gave it pictures of cancer patients who were
beginning radiotherapy treatment. On average, they found that someone going
through these treatments has a biological age that's five years
older than their chronological age, and the older the biological
age is, according to FaceAge, the worse the survival outlook.
This is according to an article in The Washington Post.

Speaker 4 (07:09):
And why would anyone want to know this?

Speaker 1 (07:12):
Well, good question. It is not just curiosity. The Post
explained how the tech could actually be a life saving
tool because it can be useful in predicting tolerance for
cancer treatments. This is something doctors are obviously constantly grappling with.
In one case in the article, a doctor had a
patient who was eighty six years old who'd received a
terminal lung cancer diagnosis, and the doctor was hesitating over

(07:34):
whether or not to recommend treatment because of the patient's
advanced age, but according to the doctor quote, he looked
younger than eighty six to me, and based on the
eyeball test and a host of other factors, I decided
to treat him with aggressive radiation therapy. The patient survived
and is now ninety years old.

Speaker 4 (07:51):
So just humor me here: what does this have
to do with FaceAge?

Speaker 1 (07:56):
Well, it's kind of like the eyeball test in the
digital world, right. And the doctor actually went back
and scanned an old photo of his patient using FaceAge
and discovered that the app basically post facto endorsed
the assessment. The patient's biological age was ten years younger
than his chronological age. I.e., he was biologically seventy-six
when he started treatment, and therefore was a good candidate. Obviously,

(08:19):
a person's face isn't the only indicator of their health,
and the tool is used alongside other clinical information. But
per the Post, it does do a better job of
predicting someone's chronological age than a doctor just using their
eyes alone, just the eyeball test.

Speaker 3 (08:33):
You know, it makes me think about the old adage
that I like to use as it pertains to women
and men, which is that you can't judge a book
by its cover, and you have to wonder if plastic
surgery and Botox are as effective at tricking the AI
as they are the human eye.

Speaker 1 (08:47):
Funny you mention it. You know, scientists are actually still studying
if lighting, surgery, makeup, or other factors can affect the
accuracy of the FaceAge reading. Although interestingly, and this
is something that I find very encouraging as a man
experiencing the beginning of baldness, or perhaps the mid stage
of baldness, FaceAge does not overreact to the visual

(09:08):
cues of aging, like being bald or having gray hair
in the way that humans do. But just taking a
step back, I think the implications of this story are
actually really really big because in the old days, it
would have taken a doctor decades of clinical experience to
develop their own sense of intuition about somebody's biological age
versus their chronological age. You know, they would have developed

(09:30):
the clinical experience and then used it to make
a judgment that they probably couldn't have explained
to you themselves how they got to. But that knowledge
was captured within a community of people who were trained
and trusted to use that information for good, according to
the Hippocratic Oath. This app points in the direction of
a future where anybody will be able to essentially tap

(09:54):
into that intuition: take a photo of your face and
know your biological age and know, you know, how much
longer you may have to live, with some degree of accuracy.
This isn't happening today, but it could happen soon. That can,
of course be empowering if you want to take a
selfie and know what's going on and maybe make some
changes to your lifestyle, perhaps. But on the other hand,

(10:18):
having bad actors, or actors who don't have your best
interests at heart (other people, colleagues, bosses, health insurance companies),
able to do the same should make us all, I think, deeply concerned.

Speaker 3 (10:28):
Yeah, I mean, especially with insurance companies. It creates a
huge moral problem.

Speaker 1 (10:34):
Absolutely.

Speaker 3 (10:35):
So I want to tell you about a headline this
week that frightened me more than the realization of how
old I am.

Speaker 4 (10:40):
When I attended the Webbies.

Speaker 1 (10:41):
Okay, what were you doing at the Webbies?

Speaker 4 (10:43):
I was invited, I was invited.

Speaker 1 (10:46):
Goes somewhere.

Speaker 4 (10:47):
Yeah, let me tell you something.

Speaker 3 (10:50):
You know how often, well, you might think differently, I
don't say no as much as I used to. At
my chronological age, I say no less.

Speaker 1 (10:56):
Yeah, yeah, well you've got to make the most, make the.

Speaker 3 (10:59):
Best of it, exactly. But I read a story this week.
Have you heard of ChatGPT-induced psychosis?

Speaker 1 (11:06):
I have to confess I have not.

Speaker 3 (11:10):
So it is kind of vague, but I got it from a Rolling
Stone headline that read, People are losing loved ones to
AI-fueled spiritual fantasies.

Speaker 1 (11:20):
So this is like AI kind of becoming a digital
cult leader or something like that.

Speaker 4 (11:26):
Pretty much.

Speaker 3 (11:27):
And it's putting stress on the relationships of the people
who have to deal with people who think they're accessing
the sort of rules of the universe.

Speaker 4 (11:36):
Through ChatGPT.

Speaker 1 (11:37):
Wow.

Speaker 4 (11:38):
You know, several people.

Speaker 3 (11:39):
Reached out to Rolling Stone about how chatbot use is
getting in the way of their relationships. People even said
that their partners were communicating with ChatGPT as if
it were a savior figure, and in some cases the
chatbot would say it was God, or tell the
user they were God.

Speaker 1 (11:55):
That's worse. It's much worse.

Speaker 4 (11:57):
So here's the opening story of the article in Rolling Stone.

Speaker 3 (12:01):
A couple's marriage is falling apart because a woman's husband
started to use ChatGPT obsessively, and it's not the
way you or I use ChatGPT. This is like someone
who's being radicalized on YouTube, except YouTube is talking back
to him. The husband was using it as a spiritual guide.
He was asking ChatGPT philosophical questions and getting increasingly

(12:24):
personal in his responses, revealing more and more of himself
along the way. And then this same person starts to
get paranoid about the government surveilling him, and he says
that AI helped him regain a repressed childhood memory of
a babysitter trying to drown him.

Speaker 1 (12:41):
I mean, this is making my mind go in all
sorts of different ways. I mean, you know, you can
imagine, if you're spending all of your time talking to
ChatGPT and feeling so well understood, that it kind
of could exacerbate the feeling of, well, if ChatGPT can
understand me, why can't my husband or my wife? But
what you mentioned about the repressed childhood memory is also
very interesting to me because I find that quite disturbing. Obviously,

(13:03):
we've talked a lot about AI-assisted therapy bots and
the promise of them, but at the point where ChatGPT
is, you know, helping people or convincing people that
they're recovering childhood memories without any training or without any guardrails,
I mean, that gets quite dystopian and concerning to me.

Speaker 3 (13:22):
It takes quite a leap to believe that a chatbot,
or a very sort of highfalutin
search tool, is something that is capable of allowing you
to recover repressed memory. It's one thing, I mean, repressed memory.

Speaker 1 (13:39):
Hold on, but what about journaling? True, this is
two-way journaling.

Speaker 4 (13:43):
It's two way journaling, but the other side.

Speaker 1 (13:46):
Yeah, but on the other side is someone who is predisposed to encourage you
in whatever direction you're going. I think that's what's concerning
about it.

Speaker 3 (13:51):
I also think it says more about where very seemingly
personalized technology fits into an increasingly godless world, which is
replacing religion with generative AI that seems friendly, more readily
available than your average guru or therapist, and probably less judgmental
than your wife or.

Speaker 1 (14:10):
Husband, certainly more prone to tell you what you want.

Speaker 3 (14:12):
To hear, definitely, and also just answering you whenever you want.
I mean, I think that's something for free.

Speaker 1 (14:18):
We know that chatbots tend to serve users things that
they know they'll like. And last month there was a story
about how OpenAI actually rolled back an update to
ChatGPT's 4o model because it was acting too
sycophantic towards users, like constantly telling them they were geniuses
or had amazing ideas, laying it on perhaps a bit
too thick, as my grandmother would have said.

Speaker 3 (14:40):
You know, it's sort of like when I'm with my
mom on Mother's Day and she asks me to get
off my phone, and I'm like, well, be as interesting
as my phone and I'll start paying attention to you
on Mother's Day.

Speaker 1 (14:49):
I give her a Mother's Day exception, except, well, I.

Speaker 3 (14:52):
Mean, look, the phone is a very seductive tool. Yeah,
and it's an always on supercomputer that gives you, and
I quote from the the article the answers to the universe.

Speaker 1 (15:02):
And that, of course, that kind of black box nature
obviously adds to the feeling of mysticism or spirituality. I mean,
I think you look back to Victorian times and mesmerism
and, you know, various quackery and stuff. The thing that
made people believe was not understanding how it worked. It
had to do with playing on that sense of the
unknown and filling it with meaning that was maybe not

(15:24):
appropriate meaning. And it feels like that's now happening on
a kind of cross-societal and extremely technologically boosted scale.

Speaker 4 (15:33):
Yeah.

Speaker 3 (15:33):
And I think, conversely, it's why people shouldn't read too
much into the advice that ChatGPT gives them.

Speaker 1 (15:40):
Yeah. I mean, I think that's something we all have
to remind ourselves of continually, because it's so tempting and
often it is so useful. We've got a couple more
headlines for you this week. Newly elected Pope Leo the
fourteenth says that he takes a similar position on artificial intelligence

(16:01):
as his predecessor, Pope Francis. According to CNN, the new
Pope laid out the vision for his papacy and he
identified AI as one of the most critical matters facing humanity.
He says that the development of AI, like the original
Industrial Revolution, would quote pose new challenges for the defense
of human dignity, justice, and labor.

Speaker 3 (16:22):
So One Drop, which is my chosen rap name for
Elizabeth Holmes, the infamous founder of fraudulent blood-testing company Theranos,
is in prison, yep, but her partner is working on
a new venture that sounds a little bit familiar. The
New York Times reports that Holmes's partner, Billy Evans, is
raising money for Haemanthus, a blood-testing startup that describes

(16:44):
itself as, quote, the future of diagnostics. And here's the kicker:
the device Billy Evans is showing to investors looks eerily
similar to the one hawked by Theranos. One woman's trash
is another man's treasure.

Speaker 1 (17:01):
Yeah, I mean, I think you might need a blood
test yourself if you're queuing up to invest
in that one. Our friends at 404
Media wrote about an ad for Coca-Cola that used
AI to scan books and surprise, surprise, got some basic
facts wrong. Last month, the company released an ad campaign
which featured passages from classic literature that mentioned Coca-Cola
by name. The problem is AI came up with some

(17:24):
examples that simply don't exist, including a sentence from a
book by British author J. G. Ballard featuring Coca-Cola
that he never wrote.

Speaker 3 (17:34):
Staying on the topic of AI ads, beloved actress Jamie
Lee Curtis asked Meta CEO Mark Zuckerberg to remove an
AI generated commercial on Instagram that she claimed stole her
likeness and it worked. According to the San Francisco Chronicle,
Curtis posted on Instagram saying, quote, it's come to this,
at Zuck, and then implored him to remove this quote

(17:56):
totally fake commercial from the internet. I can hear her
saying that. Curtis said she went through every proper channel
and even tried to DM Zuckerberg.

Speaker 1 (18:04):
Is that one of the proper channels that is?

Speaker 3 (18:06):
I mean, if you're verified, he's verified, we're verified, let's
get together. But she was unable to reach him, since he
does not follow her on Instagram. The ad was removed
within two hours of Curtis's post, which, by the way,
was set to Aretha Franklin's song Integrity. Now,
can Jamie Lee Curtis get Zuckerberg to take down all

(18:26):
the photos of himself wearing a gold chain?

Speaker 1 (18:30):
Methinks not. And we're going to take a quick break now,
but stick around, because cheating is on the rise and
college is getting a whole lot easier. Stay with us.

(18:50):
Welcome back to Tech Stuff. This week on Tech Support,
we want to dive deeper on a headline that we
touched on last week from New York Magazine: Everyone Is
Cheating Their Way Through College. The story stuck with me
because it's one of those examples of a story which
isn't just about a new technology changing the way we
do old things, in this case college assignments, or cheating

(19:11):
on college assignments. It's about tech posing a challenge to
our entire system of learning in fascinating ways, and with
consequences we can't begin to fathom.

Speaker 4 (19:21):
I actually think we can fathom the consequences of this.

Speaker 3 (19:23):
I think as we step into a future of entirely
friction-free existence, it's really interesting to see the ways people,
especially younger, more digitally native generations, skirt around the hard
parts of being a person. I read this article and
was consistently asking myself the question, if given the opportunity
to use something that made homework way easier, wouldn't I

(19:44):
use it?

Speaker 1 (19:45):
You know?

Speaker 3 (19:45):
I literally this morning tried to get ChatGPT to
summarize an article for me. It was the best of times,
it was the worst of times.

Speaker 1 (19:54):
Well, at least you've read your Tale of Two Cities
by Charles Dickens. Or the SparkNotes. Now, without further ado, joining us
to discuss how AI is roiling education is James Walsh, a
features writer at New York Magazine's Intelligencer. James, welcome to
Tech Stuff.

Speaker 2 (20:11):
Thanks so much for having me.

Speaker 1 (20:12):
When did you first start to get interested in how
AI is changing college education?

Speaker 2 (20:18):
Well, it actually started a few months ago, and I
just started calling college students, talking to college students, and
it quickly kind of dawned on me that everyone is cheating. Right.
They may not be using the word cheating, but they
are cheating according to their honor code.

Speaker 3 (20:36):
And you open the piece with the story of the
Columbia student Roy Lee. He gained notoriety for hacking coding
tests big tech uses to assess internship applications. Why did
you want to tell his story and what is the
bigger takeaway from his story?

Speaker 2 (20:51):
Sure. I think Roy was fascinating to me because of a number
of things. First, in order to prepare for interviews with
big tech companies like Google or any really big tech company,
he would work on LeetCode. It's this site
that trains developers how to do these kinds of puzzles

(21:12):
or riddles that he doesn't really think are applicable to
any kind of real world work. So he figured that
if he could develop something that would hide AI on
his browser, during a remote job interview, he could hack
these interviews and that it's not really cheating to him.
If it's hackable and there's a tool that can be

(21:33):
used to hack an assignment. He was thinking that if
not now, then in the near future it won't be
considered cheating. And it's very much the same way he
approaches studies. It's transactional to him. He had no interest
in kind of furthering himself or learning new things about
himself or about the world. He's there as a networking opportunity,

(21:54):
and he told me he's there to find, you know,
a co-founder and a wife. Roy was singular in this,
but I think the idea that if it's hackable, why
am I learning this was something that resonated throughout all
of my interviews. And that's not just you know, sort
of a logic. It's also just kind of outside pressure
to excel, to get really good grades. If they feel

(22:16):
that pressure, they're going to use this tool.

Speaker 3 (22:18):
Yeah, and that goes to I guess a larger question
I have, which is, you know, after reporting this article,
how common is it for college students to cheat using
just AI tools?

Speaker 4 (22:29):
Now?

Speaker 2 (22:31):
Oh, I think it's incredibly common. You know, our headline says everyone
is cheating, and I don't think that's I don't think
that's far off. One of the fascinating parts of this
article to me was talking to students and not using
the cheat word and watching them kind of work through it.
One of the students I talked to, Wendy, you know,
started our conversation by saying, I am against cheating. There

(22:52):
is a student handbook, and I am against cheating. I'm
against plagiarism. I'm against copying and pasting from ChatGPT into
a document. And then she proceeded to tell me exactly
how she uses ChatGPT to write all of her papers.

Speaker 1 (23:06):
Now, was she the one who was using ChatGPT
to write the paper about how different modes of education
affected students' cognitive development? And had she seen
the irony that she was using ChatGPT to write this paper?
And she basically hadn't. Exactly. Yeah.

Speaker 4 (23:23):
That's because chatchibt hasn't taught her about irony, right.

Speaker 2 (23:26):
Hasn't covered that yet.

Speaker 1 (23:27):
Well, she hadn't had time to think about it herself,
but she offloaded all of the work.

Speaker 2 (23:30):
I mean, it's remarkable. Listen, I'm coming clean: I peeked
at SparkNotes every once in a while when I
was in college, so, of course. But was SparkNotes
sort of like something that I relied on every single
time I got stuck? No, I think I had to
work through assignments a lot more often; I couldn't
easily hack them.

Speaker 3 (23:48):
The other thing that has changed a lot since I
guess we were students, because you just mentioned SparkNotes,
is like I was not contending with the allure of
anything but a Facebook wall, and one of the women
that you spoke to was talking about not so much
how hard school is, but how hard it is to
navigate all of the other digital distractions like TikTok, Snapchat, Instagram,

(24:14):
And I guess, just based on, you know, your reporting
for this article: to what extent does the use of AI,
whether we're calling it cheating or enhanced studying,
to what extent does the pre-existing digital landscape co-opt the
ability to actually participate in the thing that you or

(24:34):
your parents or student loans are being used to send
you to university.

Speaker 2 (24:39):
I don't know. Yeah, I mean, I think they're contending
with the swirl of, whether it's social media or just,
you know, the attention economy. The fact that ChatGPT dropped,
you know, at the end of twenty twenty two is
fascinating, because we'd just figured out social media in school.
We're finally taking that seriously, and suddenly it's like, oh, well, here,

(25:01):
we're going to offer the greatest cheating tool that has
ever been created to co-opt people's attention. I mean,
I think one of the fascinating parts of this to
me was the kind of introduction of these websites, Chegg
and Course Hero, which, you know, I didn't have
when I was an undergrad, but in a way, it
was like priming students to think it was okay to cheat.

Speaker 1 (25:20):
These are websites where you could pay, like, an outsourced
service to do your work for you.

Speaker 2 (25:24):
Right, And you know, a website like cheg was employing
something like one hundred and fifty thousand experts, mostly in India,
who would provide answers to questions in thirty minutes or less.
And then chatgybt comes and you just see chegs stock
price just tanks because it was like one cheating tool

(25:44):
replacing another.

Speaker 1 (25:46):
Is there an awareness that, actually, as fun and as
thrilling as the hack is, there's a real long-term
price being paid in terms of how your mind is developing?

Speaker 2 (25:55):
Yeah, I mean, I think there's certainly awareness on the
part of the professors and the people who are concerned
about that. There is, I will say, an awareness among students.
A lot of students were willing to engage in this,
and I was surprised that for many of the students
this was the first time they were having this conversation,
but they were eager to talk to me about it.

Speaker 3 (26:13):
How forward-thinking are academic institutions and educators about this,
like on the other side of things? Because it's like,
cheating is a little bit in the eye of the
cheater, or is it in the eye of the place
that was cheated? Yeah, I mean, that was such a
thing when I was younger.

Speaker 1 (26:31):
Which is like, you're only cheating yourself.

Speaker 3 (26:33):
Yeah, well, you're only cheating yourself, but like it was
also you could be kicked out of school, right right,
So to what extent are academic institutions like trying to
regulate this?

Speaker 2 (26:41):
Yeah, the approach that most schools are taking is kind
of ad hoc. It's leaving it up to professors to
decide how they want to handle this. I'm kind of
sympathetic to that because it's such a difficult thing to regulate.
How on earth do you tell students not to use something
that can help them when it's so difficult to catch,
you know? And so professors say either, you know, use it,

(27:03):
don't use it, or if you do use it, please
cite it, please provide a receipt you know that shows
the conversation you're having with chat GPT, so I can
watch kind of the gears turning. But again, it's really
hard to catch AI cheaters. You know. They have these
detection tools that really vary in their effectiveness, and even
if you are able to catch somebody using it just

(27:25):
by copy and pasting, you can't really catch somebody who's
just using it to generate ideas or generate topic sentences
and rewriting. And you can always launder AI text through
other AIs so that an AI detector can't really catch it.
So schools have quite the challenge in front of them.
And the challenge I think is convincing young students as

(27:46):
they come to their school why it's in their best
interests not to use AI.

Speaker 1 (27:51):
Well, it's a fascinating moment for elite higher education institutions
in general, right, obviously in the crosshairs of the
Trump administration. There was that David Brooks piece in
The Atlantic about three months ago about how kind of
elites and elite universities had failed and were starting
to pay the price, and people were wondering whether the

(28:11):
price tag of going was even worth it. So there's
this kind of, like now tremendous new accelerant to those issues.
In terms of the neurological development side, did you speak
to any cognitive psychologists or neuroscientists about this?
I think I've heard the term cognitive offloading before, but like,
what's this doing to brains?

Speaker 2 (28:31):
I mean I tried to dig into the research that's
out there on cognitive offloading, and there are a few
studies here and there that kind of show that, you know,
reliance on AI will reduce critical thinking. That's not necessarily surprising.
But I didn't want to lean too heavily on that

(28:52):
research or delve into it too much because it's so early,
and so I'd rather kind of let the students speak
for themselves. I mean, what's fascinating to me is how
quickly this is happening. Also, right, the sudden realization: I
was like, oh my god, half of college students have
never known college without access to this. I do think
sooner rather than later, we'll kind of have a better

(29:13):
understanding of what's happening to people's brains.

Speaker 1 (29:16):
I thought that was what was particularly brilliant about your piece.
It was almost like a piece of anthropology in terms
of you got to hear students in their own words,
wrestling with this problem. There's something which is very tempting
in front of them that they know is bad, but
they don't know what else to do. And I thought
that the drama of that really came across.

Speaker 3 (29:33):
I felt a little bit of nostalgia when I was
reading your piece for the forward-thinking, vigilante, TI-83-hacking
Kara that existed when I was in twelfth grade.
And, you know, I'm past the point
of using ChatGPT for cognitive offloading to a certain extent,

(29:55):
because it doesn't feel native to me now. And I
just wonder if it will become, I don't want to use
the word worse, but just more ubiquitous in terms of,
you know, students using it as a method of, you know, skirting.

Speaker 1 (30:08):
The opportunity to develop their own critical faculties.

Speaker 3 (30:11):
Yeah, and also just to the point of like people
want to spend time doing other things. I think that's
always been true, right where it's like you went to
college and you're like, well, I'd rather be hooking up
or partying or doing something else. Now it's like I'd
rather be on TikTok, Snapchat and Instagram than do my homework,
you know. So I think that's the difference: it
also is taking out a socialization piece, which

(30:34):
really was a part of going to school as well,
which was like I'm going to talk to my peers
about what they're writing about. We're going to maybe sit
in the library together and confer. Now it's like we're
sitting around talking about how ChatGPT, you know, is
helping us write a Chaucer essay.

Speaker 2 (30:49):
Sure. I mean, something that comes up time and time again
is, you know, the kind of one-on-one
learning that students can do with ChatGPT. They have
this brilliant TA at their fingertips at all times, and
over and over, talking to students, they're like, I do it
instead of office hours. It's instead of office hours. I talked
to professors who said, like, our office hours have just tanked.

(31:10):
People aren't showing up, and of course something is lost, right,
And you know, I was kind of shy in college.
It took some amount of, like, staring in the mirror,
being like, all right, you're going to show up to
office hours. And I think I got something out of that.
So I think that the loss of those interactions is
going to be measurable in some ways as well.

Speaker 1 (31:31):
James, just to close. One of the most surprising moments
in your piece is a quote from Sam Altman, who
said before Congress, I worry that as models get better
and better, users can have sort of less and less
of their own discriminating process. I was surprised to hear
that from him. I'm curious, did you reach out for comment,
and how are the tech companies as a whole responding to

(31:54):
this phenomenon?

Speaker 2 (31:56):
Sam Altman said that in, I think, twenty twenty three.
We of course reached out to OpenAI for comment,
and they pointed us toward just their education platform. You know,
I think this is something that the platforms are very
aware of, and even in the context of education. I
spoke to somebody from Anthropic about this on their education team,

(32:19):
and he said that they had expected students to be
some of, you know, the earliest adopters, but they
were still shocked by how true that is and how
much adoption there is on college campuses. And
they are also, you know, concerned about the implications of that.
You know, OpenAI reportedly has its own watermark

(32:43):
that would effectively cut down on plagiarism, but has chosen
not to release it. So, you know, I'm really interested in
what those conversations inside the company are like as well.

Speaker 3 (32:52):
I mean, the fact that OpenAI could use
a watermark but refuses to really shows me who
these companies are targeting.

Speaker 2 (32:59):
I mean, one of the most dystopian things that happened
to me while we were closing this piece. I got
like the push alert about Google launching this AI chatbot
for children under thirteen, and it was just like more
evidence that all these platforms are in a race to
capture this kind of like loyalty among younger users. And
it just seems like a moment when we should kind

(33:21):
of all be putting the brakes on.

Speaker 1 (33:25):
James.

Speaker 4 (33:25):
Thank you. Thank you, James. Thank you very much for
having me. That's it for this week for Tech Stuff.
I'm Kara Price and.

Speaker 1 (33:39):
I'm Oz Woloshyn. This episode was produced by Eliza Dennis
and Victoria Dominguez. It was executive produced by me,
Kara Price, and Kate Osborne for Kaleidoscope and Katrina
Norvell for iHeart Podcasts. The engineer is Phite Fraser, and
Kyle Murdoch mixed this episode. He also wrote our theme song.

Speaker 3 (33:56):
Join us next Wednesday for Tech Stuff: The Story, when
we will share an in-depth conversation with Sir David
Spiegelhalter to discuss all things risk in life, love, and
of course, tech.

Speaker 1 (34:08):
Please rate and review the show on Apple Podcasts or
Spotify or wherever you listen to your podcasts, and you
can also write to us at tech Stuff podcast at
gmail dot com. We really like getting your feedback.
