
June 13, 2025 20 mins


We take a comprehensive look at using artificial intelligence safely and effectively in education by examining guidance from the UK Department for Education and related sources. This deep dive unpacks the opportunities, challenges, and key considerations for anyone dealing with AI in educational settings.

• Understanding generative AI as a tool that creates new content based on massive datasets and pattern recognition
• The critical importance of crafting detailed, clear prompts to get quality output from AI systems 
• Maintaining human oversight as AI lacks true understanding and is simply making predictions based on training data
• Safeguarding concerns including exposure to harmful content and the creation of convincing deepfakes
• Data protection as a paramount consideration under UK GDPR with strong warnings against using free AI tools for student data
• The surprising environmental impact of AI systems which consume electricity equivalent to entire countries
• Strategic implementation requiring proper planning aligned with school development goals
• Practical applications including generating teaching resources, personalizing learning, and streamlining administrative tasks
• The need for schools to develop clear AI policies covering usage, data handling, and academic integrity

How do we best equip everyone - students, teachers, leaders - with the critical thinking skills they need to navigate this complex new landscape responsibly, ensuring the technology serves learning and not the other way around?



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
AI 1 (00:00):
Welcome to the Deep Dive.
Today we're taking a really close look at a whole stack of sources about using artificial intelligence safely and effectively in education. We've gathered up recent UK Department for Education guidance plus, you know, related videos, transcripts, documents, trying to get a clear picture of things right now.

AI 2 (00:19):
That's right, and our job really is to sort of cut through all the noise, unpack these sources and pull out the most important bits.

AI 1 (00:26):
Exactly: the opportunities, the challenges, the things you really need to think about if you're dealing with AI in schools or colleges.

AI 2 (00:34):
Yeah, so think of this as like your shortcut to getting up
to speed.

AI 1 (00:38):
Right, whether you're leading a school, in the classroom, or just really curious about how AI is changing learning. Okay, let's dive in. When we talk about AI in education, what do we actually mean? Because it feels like it's kind of everywhere already, but there's a specific focus now, isn't there?

AI 2 (00:55):
Absolutely, you're spot on. AI is in loads of familiar stuff: spam filters, predictive text on your phone, right, things we don't even notice. Exactly. You might not label it AI, but it is. For this deep dive, though, we're really zoning in on generative AI. That's the AI that actually creates new stuff: text, images, audio, video, even computer code.

AI 1 (01:15):
Oh, got it. The kind of AI that can, you know, write an essay draft or make a picture from a description. So how does that work? Like, in simple terms, based on the sources.

AI 2 (01:26):
Well, the sources break it down pretty simply. It's basically built on machine learning, computers learning from data. Okay. And then there's deep learning, using these complex, sort of brain-like neural networks that let them process huge amounts of information and generate something new.

AI 1 (01:40):
And that leads us to the large language models, the LLMs, things like ChatGPT, Gemini, Copilot.

AI 2 (01:47):
Exactly those.
They're the big examples.
They get trained on absolutely massive data sets, learning patterns in language or images or code.

AI 1 (01:55):
Right.

AI 2 (01:55):
And that lets them predict what comes next: the next word, the next pixel. That's how they generate stuff that feels so human-like or really creative.
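To make that "predict what comes next" idea concrete, here is a minimal toy sketch in Python. The candidate words and their scores are invented for illustration; a real LLM scores tens of thousands of tokens with billions of learned parameters, but the final step, turning scores into probabilities and sampling one continuation, looks roughly like this.

```python
import math
import random

# Toy next-word prediction: the model assigns a score to each candidate
# continuation, softmax turns scores into probabilities, one word is sampled.
# Vocabulary and scores below are made up for illustration only.
candidate_scores = {"photosynthesis": 2.0, "light": 1.5, "banana": -1.0}

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    exps = {word: math.exp(s) for word, s in scores.items()}
    total = sum(exps.values())
    return {word: e / total for word, e in exps.items()}

probs = softmax(candidate_scores)
next_word = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs)      # roughly {'photosynthesis': 0.60, 'light': 0.37, 'banana': 0.03}
print(next_word)  # the sampled continuation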

AI 1 (02:03):
It almost sounds like they understand it, but the sources used a helpful way to think about it: like a black box.

AI 2 (02:10):
Yeah, that's a really useful analogy.
You put something in, that's your prompt; something complicated happens inside the box, you don't see how exactly; and then you get something out: the AI's response.

AI 1 (02:20):
Okay, and this is where it gets really critical for educators, right? Yeah. Because the quality of what comes out hugely depends on what you put in. The sources really stressed this point, they really did.

AI 2 (02:31):
A detailed, clear prompt makes all the difference. We saw examples, you know, like asking for a quiz.

AI 1 (02:37):
Yeah.

AI 2 (02:38):
Just saying 'make a quiz about light' gives you something, well, pretty generic. Right. But if you specify, say, 'create a 10-question multiple-choice quiz, include the answer key; this is for UK Year 3 science, based on the national curriculum; the topic is light', then you get something much more useful.

AI 1 (02:56):
Ah, much more targeted.

AI 2 (02:57):
Exactly, and the sources even mention tools like AILA from Oak National Academy that are specifically designed to help teachers with this, generating stuff that's already aligned to the curriculum.
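A rough sketch of that prompting advice in code form. The `ask_llm` call is a hypothetical stand-in for whichever school-approved tool you actually use; the point is simply how much more context the detailed prompt carries than the one-liner.

```python
def build_quiz_prompt(topic: str) -> str:
    """Assemble a detailed prompt rather than a one-line request."""
    return (
        f"Create a 10-question multiple-choice quiz on the topic of {topic}. "
        "Include the answer key. "
        "The audience is UK Year 3 science students, "
        "and the questions should align with the national curriculum."
    )

vague_prompt = "Make a quiz about light"      # likely to produce something generic
detailed_prompt = build_quiz_prompt("light")  # much more targeted

# response = ask_llm(detailed_prompt)  # hypothetical call to an approved tool
print(detailed_prompt)
```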

AI 1 (03:07):
So it's not just the AI's power, it's our skill in asking
the right questions.

AI 2 (03:12):
Precisely. And that links straight to another really, really critical point from the sources. AI is not a human expert, fundamentally. Its output is just based on prediction from its training data. It doesn't understand things like a person does.

AI 1 (03:25):
Which means you can't just take its word for it.

AI 2 (03:28):
Exactly that.
You always need human oversight.
You've got to check the outputs.
Are they accurate?
Is there bias?
Is it actually relevant for your students, your context?

AI 1 (03:38):
Right, because the sources warned about American spellings creeping in, or outdated teaching ideas.

AI 2 (03:43):
Or stuff based on, say, the American education system. The AI doesn't know what's current or right for your specific school, it's just giving you the statistically most likely answer based on its training. Okay, and the sources are crystal clear: as the educator, you're professionally responsible for the prompt you use and for the output, how you check it, adapt

(04:03):
it, use it.

AI 1 (04:05):
So cross-check with your curriculum, official guidance; use your own expertise. Indispensable.

AI 2 (04:10):
Your knowledge is key, okay.

AI 1 (04:12):
So understanding that input-output thing and the absolute need for human oversight, well, that immediately brings the risks into view, doesn't it? It does. The sources spend a lot of time on risks and safeguarding. One expert even said: lots of opportunities, but definitely risks to navigate.

AI 2 (04:29):
Yeah, and the advice was quite pragmatic: learn fast, but act more slowly. Don't feel pressured to just jump in and adopt everything until you've really understood it and figured out your strategy.

AI 1 (04:41):
That feels like solid advice, especially when you see how much kids are already using this stuff.

AI 2 (04:45):
It's vital. The sources mentioned an Ofcom study from 2024: something like 54% of 8 to 15-year-olds. Wow. And 66%, so two-thirds, of 13 to 15-year-olds in Britain had used generative AI in the past year.

AI 1 (04:59):
That's huge.

AI 2 (05:00):
It is, and over half of those 8 to 15-year-olds said they used it for schoolwork. So, you know, it's already happening. Educators need to be informed to guide it properly.

AI 1 (05:07):
Definitely makes this conversation essential.
Okay, so the sources flag some key risk areas.
What's the first big one?

AI 2 (05:14):
Exposure to harmful content.
That's a major worry.

AI 1 (05:17):
How so?

AI 2 (05:18):
Well, generative AI makes it super easy to create really realistic images, avatars. So there are concerns about kids potentially seeing inappropriate stuff, or even grooming risks through chatbots that seem very human.

AI 1 (05:30):
And this connects to wider security issues too.

AI 2 (05:33):
It does, yeah. The UK's counterterrorism strategy was mentioned, highlighting the risk of AI being used to create and spread fake propaganda, deepfakes, on a massive scale, potentially overwhelming the systems trying to moderate content.

AI 1 (05:48):
So educators are kind of on the front line, helping students spot fakes and misinformation.

AI 2 (05:52):
Exactly. Teaching those critical thinking skills is vital: how to question what you see, recognize misinformation, challenge extremist ideas. That fits right in with the Prevent duty. Safeguarding students, and good filtering and monitoring systems, are obviously crucial too.

AI 1 (06:08):
Okay, what's another risk?
I know inaccuracy and bias came up a lot.

AI 2 (06:11):
Yes. Because AI learns from these enormous data sets, often just scraped from the internet, its outputs can be wrong or biased, reflecting the biases already out there in the world: racial and gender stereotypes, that kind of thing.

AI 1 (06:26):
And bias can be in the algorithm itself or even in the
question we ask.

AI 2 (06:30):
Both. Bias can be sort of baked into the algorithms, and there's definitely prompt bias: how you phrase the question can steer the AI.

AI 1 (06:39):
Ah, like that example they gave about maths.

AI 2 (06:41):
Yeah, that was a good one. Asking 'why do students struggle with maths' sort of assumes they do struggle, right? Whereas asking 'what factors influence students' experiences with learning maths' is more neutral.

AI 1 (06:54):
It allows for successes and challenges. A much better prompt. That really shows how our own assumptions can shape the AI's answer. It does highlight the need for critical thinking from us too.

AI 2 (07:04):
And then there are hallucinations.

AI 1 (07:05):
Where the AI just makes stuff up.

AI 2 (07:07):
Basically, yes. It generates content that sounds plausible but isn't actually true or accurate, sometimes with real confidence.

AI 1 (07:14):
So the takeaway again is: check everything.

AI 2 (07:16):
Cross-reference with trusted sources, curriculum plans, official guidance.
Use your professional judgment.
Don't just copy and paste.
Never just copy and paste.

AI 1 (07:26):
Okay, then there's data protection, UK GDPR. That must be a huge consideration with student data.

AI 2 (07:31):
Paramount, absolutely paramount, under UK GDPR. If you process personal data, names, photos, assessment results, anything identifiable, you must have a lawful basis. Using those free, public AI tools is really risky because you don't control where that data goes. It might get stored somewhere else, maybe even used to train the AI model without your OK, and it could be accessed by people

(07:53):
who shouldn't see it. And children's data needs extra special care: rights of access, correction and, importantly, the right to be forgotten, especially if they agreed to something as a child without fully grasping the risks. Using their data, particularly sensitive stuff, in general AI tools could be a serious GDPR breach.

(08:14):
So the really strong advice from all the sources is: only use AI tools that your school or college has officially approved and provided.

AI 1 (08:22):
The enterprise versions.

AI 2 (08:23):
Usually, yes, because those have been checked out. They've likely had data protection impact assessments done. They have better safeguards. The message is loud and clear: don't put sensitive or personal data into general AI tools unless your institution has specifically said it's safe after a proper assessment.
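One precaution implied by that advice, sketched minimally below: strip obvious identifiers from any text before it goes near an external tool. The name list and the patterns are hypothetical, and crude redaction like this is no substitute for a proper data protection impact assessment and approved tooling.

```python
import re

# Hypothetical roster, held locally; never sent anywhere.
KNOWN_NAMES = ["Amira Khan", "Tom Price"]

def redact(text: str) -> str:
    """Replace known names, emails, and long digit runs with placeholders."""
    for name in KNOWN_NAMES:
        text = text.replace(name, "[STUDENT]")
    # Crude patterns for emails and long digit runs (e.g. candidate numbers).
    text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]", text)
    text = re.sub(r"\b\d{6,}\b", "[NUMBER]", text)
    return text

print(redact("Amira Khan (candidate 4471025) asked about entry; reply to ak@school.uk"))
# -> [STUDENT] (candidate [NUMBER]) asked about entry; reply to [EMAIL]
```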

AI 1 (08:41):
That sounds like a hard and fast rule. Okay, linked to data: intellectual property, IP infringement.

AI 2 (08:47):
Yeah, different but related. This is about creative work, lesson plans, resources and, importantly, student work. Right, copyright belongs to the creator. So if you put student work, say an essay for AI marking, into a tool that learns from the data you feed it, you could be infringing that student's copyright, unless you have their permission, or their parents' if they're minors.

AI 1 (09:08):
And the AI itself could spit out copyrighted material.

AI 2 (09:11):
That's the other risk, secondary infringement. If the AI learned from stuff it wasn't licensed to use and then its output includes that, like copying text from another school's website or generating an image based on copyrighted art, you could be liable if you use it.

AI 1 (09:29):
So the mitigation is: use tools that don't train on inputs, get permissions, be transparent, be careful sharing AI stuff publicly.

AI 2 (09:34):
Exactly, transparency is key, especially with student
work.

AI 1 (09:38):
Okay, the last big risk area they covered was academic integrity: students using AI for assignments.

AI 2 (09:45):
Yeah, a massive challenge, and the sources were pretty blunt: those AI detection tools are just not reliable. They throw up false positives, unfairly flagging students, maybe those for whom English isn't their first language.

AI 1 (09:58):
Yeah.

AI 2 (09:58):
And they often miss well-hidden AI use anyway.

AI 1 (10:01):
So relying on detectors isn't the way forward.

AI 2 (10:03):
Not really, no. The sources strongly emphasize using your professional judgment: knowing your students' usual work, spotting inconsistencies. It's much more effective. JCQ guidance is clear: work must be the student's own, and unacknowledged AI use is malpractice. But it's not just about cheating in the old sense.

AI 1 (10:20):
It's about whether they're actually learning anything.

AI 2 (10:22):
Precisely. If you just rely on AI, you bypass the actual learning, the critical thinking. Schools need really clear policies on AI use, and assignments need to be designed differently, perhaps focusing more on process and reflection, things AI can't easily fake.

AI 1 (10:39):
And talking to students about it? Essential.

AI 2 (10:42):
Discussing the risks, the ethics, helping them see that just getting an AI answer isn't a substitute for genuine understanding and effort. And again, giving them access to approved, safe tools helps guide them towards responsible use.

AI 1 (10:54):
That's a really thorough rundown of the risks. But there's another angle the sources brought up, one that might catch people by surprise: the environmental cost.

AI 2 (11:02):
Yes, this is sustainability.
Generative AI has a surprisingly significant environmental footprint.

AI 1 (11:07):
How so?

AI 2 (11:08):
Well, these systems need huge amounts of electricity, powering all those servers in massive data centers around the world.

AI 1 (11:14):
Thousands of them.

AI 2 (11:15):
Around 7,000 globally, apparently, and they need constant cooling. Altogether, they use more energy than many entire countries.

AI 1 (11:21):
Wow, that puts a massive strain on energy supplies, even
renewables.

AI 2 (11:25):
It does. It makes it hard even for the big tech companies to hit their own carbon goals, because AI use is soaring.

AI 1 (11:33):
And it's not just energy, it's water too.

AI 2 (11:35):
Right, for cooling again. As mentioned, an average large data center uses something like 2.1 million liters of water every single day.

AI 1 (11:45):
That's staggering, often in places already short on water. Exactly. And even small AI tasks add up, like a search query.

AI 2 (11:52):
Comparatively, yeah.

AI 1 (11:53):
Yeah.

AI 2 (11:54):
An AI search can use maybe 10 times the energy of a normal Google search. Generating one image could use as much energy as charging your phone halfway. It really makes you stop and think.
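A quick back-of-envelope calculation using the rough figures quoted in the episode. The 0.3 Wh baseline per standard search and the 50-searches-a-day classroom are illustrative assumptions, not measurements.

```python
# Back-of-envelope arithmetic with the episode's rough figures.
GOOGLE_SEARCH_WH = 0.3                    # assumed baseline per standard search
AI_SEARCH_WH = GOOGLE_SEARCH_WH * 10      # "maybe 10 times the energy"
DATA_CENTER_WATER_L_PER_DAY = 2_100_000   # "2.1 million liters every single day"

daily_ai_searches = 50  # hypothetical classroom's daily usage
extra_wh = daily_ai_searches * (AI_SEARCH_WH - GOOGLE_SEARCH_WH)
print(f"Extra energy vs. standard searches: {extra_wh:.0f} Wh per day")
print(f"One data center's water per year: "
      f"{DATA_CENTER_WATER_L_PER_DAY * 365 / 1e9:.2f} billion liters")
```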

AI 1 (12:03):
Is there any positive news on this front?

AI 2 (12:05):
Mitigation? Well, tech companies are investing a lot in renewables, and there's potential for AI itself to help tackle climate change, maybe through complex modeling, but the immediate energy and water demand is definitely a big concern right now. OK, the sources did mention, though, that smaller, maybe more efficient AI models are expected around 2025, which should help.

AI 1 (12:25):
But until then, it sounds like we need to be conscious
users.

AI 2 (12:28):
Definitely. Just being mindful: do I really need AI for this specific task, or would a standard search, or using a resource I already have, be more efficient and, well, better for the planet? It's about thinking about the wider impact of our choices.

AI 1 (12:43):
Okay, so we've looked at what AI is, the pretty significant risks, the environmental side. With all that on the table, how did the sources suggest schools and colleges actually go about using AI safely and effectively?

AI 2 (12:56):
Well, the DfE guidance is quite positive, really. It says AI can genuinely transform things, help teachers focus more on teaching, but it needs safe, effective implementation and the right infrastructure. Leaders really need to grasp both the potential and the pitfalls.

AI 1 (13:11):
Yeah, it sounds like something that needs a proper plan, not just, you know, buying some new software.

AI 2 (13:16):
Absolutely. It needs a strategy. It should tie into your school's wider digital plan, your development plan. The guidance even suggests checking it against the DfE's existing digital and tech standards.

AI 1 (13:26):
What kind of practical things should leaders be thinking about, based on the sources?

AI 2 (13:30):
Quite a few key things came up. Obviously, ensuring you're meeting safeguarding duties; Keeping Children Safe in Education is fundamental. Making sure AI use aligns with your school's educational philosophy. Developing or updating policies: data protection, IP, safeguarding, ethics, that's crucial. Planning for any infrastructure upgrades needed, setting up

(13:51):
support teams. Evaluating tools before you commit, monitoring how AI is being used. Maybe even setting up an AI steering group to guide the whole process.

AI 1 (14:00):
I remember seeing some specific tips for college leaders from Jisc too.

AI 2 (14:04):
Yes, Jisc had five clear actions. Lead by example: use the tools yourself. Set boundaries: clear guidelines for exploring AI. Invest in staff training: that's vital. Create an AI culture: encourage curiosity and critical thinking about it. And collaborate with industry: understand how AI is changing the workplace students are heading into.

AI 1 (14:23):
All sounds very sensible, but running through all of that, the absolute core message seemed to be about keeping humans in control.

AI 2 (14:29):
That's the golden thread, absolutely. You always maintain human oversight. You never outsource your professional judgment, your thinking, your decisions to an AI. Right. AI is positioned as a tool to support humans: support expertise, interaction, judgment. That human element remains totally central to education.

AI 1 (14:49):
So it's about empowering people with AI, not replacing
them.

AI 2 (14:52):
Precisely, and the sources suggest a kind of phased approach to rolling it out. Like, explore first: assess your needs, check your tech, talk to everyone. Then prepare, then deliver: train people, monitor how it's going, get feedback. And then sustain: embed it in your strategy, keep policies updated, review tools, keep talking about it. They even mentioned using audit tools to help figure out where

(15:14):
you are now and plan the next steps.

AI 1 (15:17):
Right, let's switch gears a bit. What does this actually look like in practice, in the classroom, in the school office? The sources gave some really interesting examples of safe and effective uses.

AI 2 (15:25):
Yeah, they broke them down into sort of supporting teaching, personalizing learning, and admin tasks.

AI 1 (15:32):
Okay, supporting teachers first.

AI 2 (15:33):
Some great examples there. AI generating lesson resources quickly, like that photosynthesis plan that included differentiation ideas. Creating quizzes from maybe a block of text you have. Breaking down complex text for different reading levels; simplifying that geography text was a good example. I like the creative stuff too. Yes, getting AI to generate, like, a rap about the planets or

(15:56):
mnemonics or maths problems set in fun contexts, like that Avengers fractions problem. And drafting routine things: emails home, adapting the tone, helping draft policies, even helping plan the logistics for a school trip. Lots of workload reducers.

AI 1 (16:12):
And personalized learning.
That sounds like a big potential area.

AI 2 (16:15):
Huge potential.
Yes.

AI 1 (16:16):
Yeah.

AI 2 (16:16):
Helping teachers adapt resources for kids with specific needs, like that computer science lesson adaptation mentioned.

AI 1 (16:22):
Okay.

AI 2 (16:23):
Generating personalized learning plans, but always with a teacher overseeing it, because they know the child.

AI 1 (16:27):
Right.

AI 2 (16:28):
There was one really powerful example: an automotive teacher using an AI tool trained only on his own curated teaching materials.

AI 1 (16:37):
Ah, so a closed system.

AI 2 (16:38):
Exactly, safe data. It meant students could ask it questions and get tailored info based only on what the teacher approved. It could even generate podcast summaries for accessibility. Really clever.
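One way to approximate that closed-system idea is plain retrieval over a teacher-approved document set, sketched below with simple TF-IDF matching. The snippets are invented, and the actual tool described in the episode may well work quite differently; the point is that answers can only ever come from the approved materials.

```python
# Minimal sketch of the "closed system" idea: respond to student questions
# only from teacher-approved materials, here via simple TF-IDF retrieval.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

approved_materials = [  # invented automotive snippets for illustration
    "Torque is a measure of rotational force applied to the crankshaft.",
    "A four-stroke engine cycles through intake, compression, power, exhaust.",
    "Brake pads convert kinetic energy into heat through friction.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(approved_materials)

def answer_from_approved(question: str) -> str:
    """Return the most relevant approved snippet, never outside content."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_vectors)[0]
    return approved_materials[scores.argmax()]

print(answer_from_approved("How does a four-stroke engine work?"))
```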

AI 1 (16:49):
That's a fantastic use case, assuming all the data handling and permissions were solid.

AI 2 (16:54):
Absolutely.
Safeguards are paramount.
What about admin tasks? That seems like a natural fit for AI.

AI 1 (16:59):
Definitely.
Things like getting first drafts of policies, summarizing long documents, checking policies against new laws, maybe helping with timetabling or structuring development plans, all stuff that could save a lot of time.

AI 2 (17:11):
And using data for insights.

AI 1 (17:13):
Yes, and this warning came up again and again: never put individual student personal data into general AI tools. Okay, critical point.

AI 2 (17:21):
But using anonymized data with approved, secure tools, that can help spot patterns: attendance trends, performance across cohorts, like visualizing GCSE results against Key Stage 2 scores, perhaps, to see overall patterns, not tracking individuals.
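A minimal sketch of that kind of cohort-level analysis, assuming pandas and fully synthetic, anonymized numbers; no names or identifiers ever enter the frame.

```python
import pandas as pd

# Fully synthetic, anonymized cohort-level scores; no identifiers.
cohort = pd.DataFrame({
    "ks2_score":   [98, 102, 105, 110, 95, 108, 101, 112],
    "gcse_points": [38, 44, 47, 55, 35, 52, 43, 58],
})

# Cohort-level pattern, not individual tracking: how Key Stage 2 scores
# relate to later GCSE outcomes across the group.
correlation = cohort["ks2_score"].corr(cohort["gcse_points"])
print(f"KS2 vs GCSE correlation: {correlation:.2f}")
print(cohort.describe().loc[["mean", "std"]])
```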

AI 1 (17:36):
So data analysis is possible, but with extreme caution on personal info.

AI 2 (17:40):
Absolute caution. Transparency, lawful basis under GDPR: essential.

AI 1 (17:45):
Okay, and finally, what about students themselves using
AI safely?

AI 2 (17:50):
Well, if the tools are provided by the institution with the right safeguards, like not training on student inputs, having monitoring, then students can use LLMs for things like research. That EPQ student example, using a college tool for research questions, was mentioned. But the key is teaching them to verify the facts themselves and to credit

(18:11):
the AI properly. Digital literacy.

AI 1 (18:13):
And giving them safe tools helps bridge that digital divide too, right? If they can't afford premium tools.

AI 2 (18:19):
Exactly, and students can use AI creatively, with guidance, like that image generator for creative writing prompts. Some places are being really upfront, explaining AI policies at enrollment, setting expectations early.

AI 1 (18:31):
And there was that framework for writing good prompts, F-C-T-S. Yeah, that's a handy one.

AI 2 (18:36):
Focus the prompt: clear, concise. Analyze the output: check it carefully. Check for bias: actively look for it. Tailor suitability: make sure it fits your context. And strengthen the prompt: refine it based on what you got back. A good little checklist.

AI 1 (18:49):
So, wrapping this all up, what's the big picture from this deep dive? It seems AI offers some genuinely exciting ways to improve education.

AI 2 (18:57):
Definitely. Easing teacher workloads, creating dynamic resources, personalizing learning.

AI 1 (19:04):
The potential is clearly there. But, and it's a big but, using it safely means really getting to grips with the risks: safeguarding, data protection, IP, academic integrity, even the environment. These aren't side issues, they're central.

AI 2 (19:18):
Absolutely. The core message, really, from all the sources is: be strategic, be considered. Human oversight and judgment have to stay front and center. Be transparent. Use approved tools with proper safeguards. Develop clear policies.

AI 1 (19:31):
It's obviously moving incredibly fast, this whole area, but it sounds like if we stick to those core principles, keeping students safe, focusing on real learning, we can hopefully harness the good bits responsibly.

AI 2 (19:40):
That's the goal. Which, I suppose, leaves us with a really important question for you, the listener, to think about. As AI gets more woven into education, and we know young people are using it a lot already, how do we best equip everyone, students, teachers, leaders, with the critical thinking skills they need to navigate this complex new landscape responsibly, making sure the technology serves learning and not the other way around?
