Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Welcome to the Deep
Dive, your shortcut to being
truly well-informed.
Today we're diving into something really central to our
lives: how artificial intelligence is rapidly changing
education.
We're not just looking at the headlines.
We want to unpack the actual experiences, the perspectives of
students, educators, academic administrators.
Our mission really is to understand how we can navigate
(00:21):
this new frontier.
How can we make sure AI genuinely enhances learning
without, you know, sacrificing that essential human element?
Speaker 2 (00:28):
And to guide us.
We've got a really powerful source.
It's a comprehensive global research report from April 2025
by Turnitin, and this isn't like a small study.
It's based on a survey of 3,500 respondents: we're talking 500
academic administrators, 500 educators and a big chunk, 2,500
students, and they're spread across Australia, India, Mexico,
(00:48):
New Zealand, the UK, Ireland and the US.
So it gives us this uniquely broad look at AI's real-world
impact.
Speaker 1 (00:55):
Right, and our goal
here isn't really to pick sides,
is it?
It's more about peeling back the layers of this, well,
incredibly complex landscape.
We want to uncover maybe some surprising insights, understand
the core tensions and pinpoint where guidance and support are
just desperately needed for you, the learner, and for, well,
everyone in education.
And straight away, something really jumped out at me.
(01:16):
It kind of challenges common assumptions.
Students are actually more concerned about AI's impact than
educators or administrators are and, what's maybe even more
striking, they're actively looking to educators for
guidance.
What do you make of that?
Speaker 2 (01:28):
Just as an opener,
it's a profound way to start,
isn't it?
It completely reframes who we might think is just jumping on
the AI bandwagon.
What's fascinating is the report actually found a strong
78% of students, educators and admins feel positive about
AI's impact.
That's a huge wave of optimism about its potential. 78%.
Speaker 1 (01:47):
That's really high.
But then you flip the coin, right?
A staggering 95% of all participants believe AI is being
misused somehow at their institution.
So almost everyone sees the good, but almost everyone also
sees the bad.
It feels like a fundamental contradiction.
Is this just, you know, excitement about the potential
(02:08):
hitting the messy reality of now?
Or is it something deeper, like ethical worries people haven't
squared yet?
Speaker 2 (02:14):
I think you've hit
the nail on the head there.
It really suggests that while the vision for AI in education
is grand and mostly optimistic, the actual doing of it is
full of ambiguity, and that leads straight to that feeling
of misuse, because the clear lines, the shared understanding,
they just aren't there yet.
Speaker 1 (02:34):
And part of that
ambiguity, it seems, comes from
just a basic disagreement on what cheating even means with AI.
The report shows a really significant difference in
perception there, especially about using AI to write, like, a
whole assignment.
Speaker 2 (02:41):
It's a critical point, yeah.
Only 45% of academic administrators think using AI
for the whole thing is cheating.
Compare that to 55% of educators, and then students, 63%.
They're more likely to see it as cheating than the people
setting the policies.
Speaker 1 (02:57):
Wow, that's a huge
gap.
I mean, how can you have clear rules when the people making
them, enforcing them, and the students under them
define the main problem so differently?
It feels like everyone's got a different rule book.
Speaker 2 (03:08):
It really does.
And this confusion, it isn't just about words, right?
It undermines trust.
It creates this environment where students are unsure what's
okay, and educators, well, they struggle to be consistent.
It absolutely screams for clearer policies, for a shared
understanding of how and when AI can be used.
That's really step one to tackling this widespread unease
(03:29):
and, frankly, distrust.
Speaker 1 (03:31):
That deep confusion
around cheating definitely fuels
the unease, and it sounds like that unease is part of this
bigger overwhelm factor.
The report talks about this feeling that AI just exploded
and everyone's scrambling.
They even call it, well, we don't know what we don't know.
Speaker 2 (03:44):
That's a perfect way
to put it.
The sheer number of tools, the volume of information, it's
overwhelming almost everyone.
We're talking 80% of educators, 73% of students, 72% of
administrators feeling this way.
It's not some small issue, it's systemic, it's affecting pretty
much everyone.
Speaker 1 (04:01):
And, like we
mentioned, that student anxiety
point is major here.
Everyone feels overwhelmed, sure, but students seem
particularly worried, 64% of students expressing worry about
AI use.
Compare that to 50% of educators, 41% of admins.
It just flies in the face of the idea that students are
blindly embracing AI.
They're really concerned about their learning.
Speaker 2 (04:22):
And this widespread
unease, especially from students,
it links directly to a fundamental lack of clear
guidance for the educators themselves.
If educators aren't fully equipped or confident, well, that
naturally limits how well they can use AI for students or the
institution.
There's a guidance gap, you know, and students are right in
the middle of it.
Speaker 1 (04:39):
Which leads us
straight to the educators'
burden, doesn't it?
Sounds like teachers are really at a crossroads.
They're being asked to do a lot.
Speaker 2 (04:49):
That's the core
challenge, isn't it?
The report's clear: educators are seen as the key, the key to
helping students navigate AI, both for their studies now and
for, you know, being AI-ready for work later.
They're expected to be the guides on the ground in this new
landscape.
Speaker 1 (05:05):
But there's this huge
knowledge gap.
The report says 50% of students admit they don't know how to
get the most out of AI, and they look straight to their teachers
for that help.
Yet over half of educators, 55%, also say they lack the knowledge
to use AI effectively for teaching, even for their own
admin tasks.
So if students are looking up and teachers are struggling too,
where does the actual support come from?
Speaker 2 (05:29):
It really throws a
spotlight on a systemic issue,
doesn't it?
A failure, perhaps, to provide the right professional
development, a clear institutional strategy around AI?
It's a major blind spot and it impacts everything: curriculum,
integrity, you name it.
And it's not just about grades, it's about jobs.
90% of educators and 89% of admins believe AI readiness is
essential for future careers.
70% of students agree.
(05:49):
So this demand for AI skills makes the current knowledge gap
a really critical problem.
It needs urgent attention, really.
Speaker 1 (05:55):
What's also worrying
is the lack of institutional
support.
37% of educators said their institution just doesn't have
the resources for them to use AI effectively, which, I guess,
pushes them to find their own solutions outside the system.
Speaker 2 (06:09):
Exactly, and that can
lead straight to inconsistency,
unfairness across different classes, different schools.
That inconsistency then feeds right back into academic
integrity concerns, another core theme.
Despite AI adoption booming, the guidelines for proper use
are lagging.
Addressing these risks is just crucial for maintaining academic
integrity.
(06:29):
It's not just about, you know, blatant copying.
It's subtler stuff too, like students using AI for ideas
without citing it, or refining work so much it's barely their
own thinking anymore.
The lines get really blurry when AI feels like an uncredited
co-author, not just a tool.
That's where integrity takes a hit.
Speaker 1 (06:45):
And it's not just
about misuse, it's about the
learning process itself.
A key student worry really stands out: 59% are concerned
that relying too much on AI could actually reduce critical
thinking skills.
And that's not just a small worry, is it?
If students use AI to generate ideas or even drafts, are they
skipping the hard mental work that actually builds those
(07:06):
thinking skills?
Is AI becoming a crutch?
Speaker 2 (07:09):
Precisely.
That struggle, wrestling with complex
ideas, building your own arguments, pulling different
sources together, that's fundamental to critical thinking.
If AI does the pre-digesting for them, the risk of, well,
intellectual atrophy is real.
It's less about the AI itself and more about how it's used, or
not used, maybe.
How do we make sure it enhances thinking, not replaces it?
And this concern about critical thinking, it pops up across the
(07:31):
board.
Both secondary and higher education institutions reported
the same top two challenges: academic integrity, and lack of
expertise and training on AI.
Speaker 1 (07:39):
So it's not just a
college problem or a high school
problem, it's systemic, affecting all levels.
And they also flag the same top two risks: misinformation or
misuse of AI, and this loss of innovation and critical thinking.
That alignment, that widespread agreement on the core worries,
it feels like an opportunity, maybe, for a unified approach.
Speaker 2 (07:59):
It does seem that way.
The report backs this up: 57% of everyone surveyed sees
AI as a threat to academic integrity.
And look, if critical thinking isn't being taught effectively,
if students aren't being prepared properly, it really
calls into question the core purpose of education itself.
The whole mission is potentially at stake.
Speaker 1 (08:16):
Okay, as we start to
wrap this up, let's touch on the
global picture briefly.
The report mentioned some differences, right? And then
maybe pivot to what the report suggests as the path forward.
It's clearly not the same everywhere.
Speaker 2 (08:26):
That's right.
The level of positivity about AI's impact, it varies quite a
bit.
India, for instance, reported 93% positivity, Mexico 85%,
much higher than the US at 69% or the UK and Ireland at 65%.
Could be down to different exposure levels, policies, maybe even
cultural views on new tech.
Speaker 1 (08:45):
Interesting.
So the enthusiasm varies, but that feeling of being
overwhelmed, the uncertainty, that sounds more universal, even
if they're positive overall.
Speaker 2 (08:54):
That's spot on.
The overwhelm is widespread.
India actually had the highest concern level at 85%,
despite the high positivity, and globally, half, 50%, of
students just don't know how to get the most benefit from AI.
That uncertainty is high in Mexico, 51%, India, 50%,
the UK and Ireland, 47%.
It really shows a global guidance gap.
Speaker 1 (09:16):
And institutions are
definitely playing catch-up with
strategy.
Only 28% have fully integrated AI into their
strategic plans.
That seems really low, suggests a lot of reaction, not much
proaction in this fast-moving area.
Speaker 2 (09:28):
It points to a
significant lag, yes.
The way forward, according to the report, really hinges
on urgent, open communication and real collaboration between
everyone: students, educators, administrators.
Plus, crucially, developing clear guidelines on acceptable AI
use. Tailored guidelines, too, because using AI for coursework
is different from exams, which is different from revision.
Right? Context is key.
Speaker 1 (09:48):
On a hopeful note, I
saw that 33% of students in
higher ed say they are involved in making new AI policies.
That sounds really positive, like students are willing to
step up and be part of finding solutions.
Speaker 2 (10:00):
It's a very positive
sign.
Yeah, it suggests a collaborative future is possible
if institutions open that door.
And looking ahead, the expectation is huge: 92% of
educators and 88% of students expect AI's role to expand
significantly in the next two to three years.
The future is definitely AI-infused, ready or not.
Speaker 1 (10:25):
Wow, okay.
So let's just recap
the core tension we've explored.
There's this widespread positivity about AI's potential
in education, but it's sharply contrasted by an equally
widespread belief that it's being misused.
And underneath it all, there's this significant knowledge gap,
this resource gap, especially for students and teachers, plus
just basic confusion about what misuse even means.
Speaker 2 (10:41):
And the responsibility for bridging that gap?
It seems pretty clear.
86% of respondents agree.
It's up to institutions to educate students on how to use
AI ethically and effectively.
This isn't something you can just leave to chance, or hope
individuals figure out on their own.
Speaker 1 (10:55):
Absolutely, and one
educator in the source put it so
well: we need the human touch, always.
The report notes that without enough human interaction, that
guidance, students might just feel disengaged, shortchanged
even. It's not just about the tech.
Speaker 2 (11:12):
It's about connection,
critical thinking, human
development.
That's real learning.
That human element is just paramount, and it leaves us with
a big question, doesn't it?
If AI can handle a lot of the basic information processing,
which it clearly can, what are the new higher-order skills that
will really mark out a well-educated person in the
future, and how will schools and universities evolve to actually
prioritize and teach those essential human skills?
(11:34):
Things like critical thinking, creativity, empathy, the stuff AI
can't replicate.
How do we make sure those remain central to what it means
to be truly educated and capable in this new AI-powered world?
It's definitely something to think about as we look ahead.