
June 29, 2025 11 mins


We explore the UAE's comprehensive green paper on generative AI in education, unpacking a multi-level framework that addresses both opportunities and challenges of AI integration in learning environments. This three-tiered approach examines policy, institutional implementation, and classroom experience to create a balanced roadmap for educational transformation.

• The UAE demonstrates leadership in AI education through national strategies and concrete actions like developing AI tutors for their curriculum
• International frameworks including UNESCO's AI Competency Framework inform the UAE's ethical approach to educational AI
• The Human-Machine Interaction quadrant provides institutions with a structured way to implement different types of AI assistance
• Professional development for teachers must focus on practical benefits and ease of use, not just technical training
• Academic integrity requires clear policies on appropriate AI use alongside verification methods like oral examinations
• Critical questions emerge about equitable treatment across subjects and potential cultural biases in AI systems
• The future of education will require rethinking what it means to "know" something in an age of instant AI assistance

Link to original Green Paper.




Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Welcome to the Deep Dive.
Today we're jumping into something huge, really evolving fast: generative AI in education.
We're looking at it through this really comprehensive green paper from the UAE.
So our mission, basically, is to unpack the key insights, the opportunities, which are massive, but also the challenges.
Think of it as your shortcut to getting up to speed on AI and

(00:20):
the future of learning.

Speaker 2 (00:22):
Yeah, and what's great about this green paper is
its structure.
It uses three levels, macro, meso and micro, to break down GAI's impact.
And the UAE?
Well, that's a really relevant place to look, isn't it?
They've had things like the UAE Council for AI and Blockchain for a while now, plus appointing a minister of state for AI way back in 2017.

Speaker 3 (00:56):
That really put them on the map, kind of at the
forefront of this whole AI push.
Also, the absolute need for a solid framework to handle the tricky stuff, and there is tricky stuff: making sure access is fair, keeping learning human-centered, protecting intellectual growth, the psychological side of things and, crucially, needing humans to actually check what the AI puts out.

(01:16):
You know, fighting bias, making sure it's accurate.
So let's start at the top, the macro level.
What's the UAE's big picture, the grand vision for generative AI?
How's it shaping policy, ethics, that sort of thing?

Speaker 2 (01:27):
Well, their vision isn't just a vague idea.
It's tied into national strategies like Vision 2021 and Centennial 2071.
These are serious plans aiming for tech leadership and making sure pretty much everyone has some AI literacy.
And we're seeing real action.
They've launched a GAI guide which is pretty detailed, even covering education uses, plus big pushes to train teachers,

(01:47):
give them AI skills.
And what's really interesting, I think, is the Ministry of Education's own initiative developing and launching AI tutors.
They've even partnered with ASI, you might know them as Digest AI before, to build an AI tutor specifically for the UAE's national curriculum.
So the takeaway is this very proactive, top-down integration, making AI literacy core, not just an add-on. And ethics

(02:08):
governance.
The green paper is really thorough here.
It pulls in major international frameworks.
You've got UNESCO's 2024 AI Competency Framework, which covers the basics: ethics, practical skills, being adaptable.
And it leans heavily on UNESCO's 2021 recommendation on AI ethics, emphasizing, you know, keeping it human-centered, respecting rights, cultural diversity.
Also, the 2019 Beijing Consensus on AI and Education

(02:32):
gets a mention, calling for fair access, getting everyone involved in making policy.
So these aren't just name-dropped guidelines.
They seem to be genuinely shaping the UAE's approach from the classroom up, which is maybe different from places where ethics feels more like a top-down memo.

Speaker 1 (02:44):
Yeah, and that connects to the real world, doesn't it?
Like, the World Economic Forum keeps talking about AI for personalized learning, and that IBM index from 2023, 42% of UAE companies already using AI, that's a lot.
It really highlights this need for different skills in the workforce, doesn't it?
Critical thinking, yeah, but also that ethical understanding

(03:05):
feels more important than ever.
Is the paper hinting we need to rethink what fundamental skills even are?

Speaker 2 (03:10):
Absolutely. And stepping back, that brings us to the core ethical pillars.
They focus on preventing bias, keeping data private and being transparent and accountable.
So, for instance, that AI tutor project from the ministry: it specifically has built-in privacy stuff and aims for culturally appropriate content fitting local values.
The paper even floats the idea of an ethical AI toolkit, like a

(03:32):
practical guide for teachers on how to use AI responsibly.
It's all about making AI a tool that helps teachers, not something that clashes with their values or adds complexity without benefit.

Speaker 1 (03:41):
OK, but that raises a tough question, I think: what does equity in AI really mean here?
Is it just okay, everyone gets a login, or is it something deeper?
Like, does the AI treat subjects fairly?
Math seems straightforward, maybe, but what about history or literature?
Could biases in the training data subtly skew things, depending on the subject?

Speaker 2 (04:00):
That's a really good point, and it's complex.
The paper uses a neat analogy.
It compares AI in education now to early mobile phones.
You know those first brick phones.
They worked, they were valuable, but they didn't totally transform everything like smartphones did later.
AI is kind of like that now in education.
It's useful, definitely, but it hasn't fully woven itself into the fabric and changed everything yet, and because it's

(04:22):
still relatively early days, that makes universal access tricky and also makes it harder to ensure it works equally well across all subjects, especially where, yeah, cultural biases might creep in.
We're still figuring it out, basically: scaling up from useful tools to genuinely transformative systems.

Speaker 1 (04:38):
Right, okay.
So if macro is the big strategy, meso is where the rubber meets the road: the institutions, schools, universities.
It makes me think of the calculator again.
First it was banned as cheating; now it's essential.
So the question for these institutions is how do you tell the difference between AI use that's genuinely transformative and AI that's just the latest gadget?

Speaker 2 (05:00):
Exactly.
And the paper doesn't shy away from the big questions, like could these intelligent tutoring systems, ITSs, actually replace human teachers entirely?
And the ethics of that, wow.
It explores ideas like should AI behave ethically like a human, or should we aim for some kind of ultimate moral machine with higher standards?
But, and this is key, it always comes back to needing humans in

(05:22):
the loop for oversight, for quality control.
So, while replacement is maybe a theoretical possibility down the line, the paper really grounds it in keeping educators in charge, focused on the learner, using AI, not being replaced by it.

Speaker 1 (05:34):
Keeping humans in the loop makes a lot of sense, and the green paper offers this practical tool: the Human-Machine Interaction, or HMI, quadrant typology.
How does that actually help institutions figure out policy?

Speaker 2 (05:47):
It gives them a framework, a map really.
It breaks down AI use into four types.
You've got full automation: AI runs the show, like generating reports automatically.
Then collaborative interaction: AI helps students work together, maybe facilitating group projects.
Then full human control: AI helps the teacher, say with lesson plans, but has zero autonomy.
The teacher drives everything.

(06:07):
And finally, assisted by AI: AI handles routine tasks like grading multiple choice or tracking attendance, freeing up the teacher.
So institutions can use these quadrants to think about pilots, develop guidelines.
It makes the whole process less chaotic, more intentional.

Speaker 1 (06:24):
Okay, implementing is one thing, but what about quality?
How do schools make sure they're using AI well and, you know, ethically?

Speaker 2 (06:30):
That's critical.
The paper emphasizes needing clear benchmarks.
We need to measure if GAI is actually improving learning outcomes, not just making things quicker.
It's about moving past convenience to see real educational value and, alongside that, strong ethical guidelines on transparency, data privacy, making sure AI helps learning

(06:50):
without creating new problems or risks, which ties straight into getting teachers ready.
The paper really stresses professional development and it links it to the Technology Acceptance Model, TAM.
Basically, teachers need to see that these tools are genuinely easy to use and actually help them in their real classrooms.
It has to have practical benefits. Otherwise, why bother?
So training programs, yes, but also getting feedback, maybe

(07:13):
through surveys, to make sure the training and tools actually work for them.
Make AI intuitive, not another burden.

Speaker 1 (07:18):
And all of this has to feed back into the curriculum, right? Building in AI literacy, critical thinking skills.
Are we talking new courses, or, yeah, more like weaving it in?

Speaker 2 (07:27):
The paper suggests things like dedicated AI ethics modules becoming standard, so students really grapple with the implications, but also using GAI simulations, say in STEM subjects, for hands-on problem solving.
So they're learning with AI within their existing subjects, not just about AI separately.

Speaker 1 (07:49):
Okay, let's zoom right in now to the classroom, the micro level, the student experience, and the paper flags this really fascinating, almost weird loop: the AI-on-AI assessment.
Student uses AI to write an essay; AI grades the essay.
My first reaction is, isn't that just cutting out the learning bit?
How does the paper suggest we stop students just relying on it and actually push them to think?

Speaker 2 (08:04):
That's a huge concern, definitely, and it connects to that earlier point about equitable treatment across subjects.
It's maybe easier to spot AI writing in, say, a technical report than in a history essay, where nuance and interpretation matter more.
And should AI even try to be neutral in subjects like history or literature, where cultural context is so important?

(08:25):
What if the training data has biases?
The paper flags this as needing serious thought for policy.
How do we manage those potential biases from, you know, non-local training data?
How do we make sure students still engage critically with sources?
It's about making AI culturally aware, or at least acknowledging its limitations, not just accepting a global default.

Speaker 1 (08:44):
But despite the challenges, there's also the flip side: GAI's power for adaptive, personalized learning, real-time support.
That fits perfectly with ideas like constructivist learning theory, right? Letting students build knowledge at their own pace.
So it's more than just a fancy e-book.
It's about platforms that genuinely adapt, maybe simulations for real-world problems that adjust to the student.

Speaker 2 (09:05):
Absolutely.
But then academic integrity comes roaring back: assessment.
The paper stresses needing crystal-clear policies on what's OK and what's not OK regarding GAI use in assignments and exams, and not just rules but ways to check, like maybe using viva voce, more oral exams, if misuse is suspected, to see if the student truly understands the work, or using authorship

(09:26):
authentication software checking originality, making sure it's really the student's thinking,

Speaker 1 (09:30):
even if AI tools were used appropriately as aids.
And a massive piece of this puzzle is the students themselves: their AI literacy, their skills.
It's not just can you use ChatGPT; it's, do you understand it?

Speaker 2 (09:41):
Precisely. The green paper says we need to embed ethical AI use right into the curriculum, get students doing collaborative projects using AI tools responsibly, regular training on AI ethics and, interestingly, it also mentions raising awareness about AI's environmental footprint, thinking about responsible use in a broader sense.
Zooming out slightly, GAI is also changing research, isn't it

(10:02):
?
Literature reviews, synthesizing info.
It offers amazing tools for finding patterns, summarizing vast amounts of text.
But big challenges too: reliability, authenticity.
How do we make sure students are still digging into primary sources?
How do we guard against AI confidently presenting misinformation?
How do they cite AI use transparently? And again,

(10:22):
mitigating those historical or cultural biases.
These are huge questions for developing real research skills today.

Speaker 1 (10:31):
So, as we wrap up this deep dive, it feels like this green paper from the UAE offers a really solid, multilayered way to think about generative AI in education.
It doesn't just hype the benefits, it gives a very clear-eyed view of the risks too. A balanced perspective, which is refreshing.

Speaker 2 (10:45):
Yeah, I agree, and looking at the bigger picture, it provides such a strong foundation for, well, actual policy discussions and real strategies.
It really positions the UAE's education system to be a leader in integrating GAI responsibly and, honestly, preparing students for the future that's already arriving.
It's definitely a blueprint others could look at, and it leaves us with a really interesting question to mull

(11:07):
over, doesn't it?
As AI becomes more and more part of education and we really push for these human-centered approaches, what parts of our human creativity, our critical thinking, are going to be amplified, and maybe what new ways will we need to think about knowledge itself?
What does it mean to know something in the age of AI?

Speaker 1 (11:25):
That is a profound thought to end on, something for all of us to think about.
Thank you so much for.