
October 2, 2025 19 mins

Let’s strip away the hype and make AI understandable, useful, and human.

Google Research VP Maya Kulycky explains why the human brain remains unmatched (and why that's good news!) and offers practical guidance for using AI as a collaborator, not a crutch. Google DeepMind COO Lila Ibrahim takes us inside projects that expand what's possible with AI in ancient history (Project Aeneas) and molecular biology (AlphaFold).

Responsibility runs through every story here as both Maya and Lila emphasize safety reviews, partnerships with domain experts, and community voices shaping how tools land in classrooms, labs, and homes. We also talk about supporting different learners, like how AI can patiently explore rabbit holes for one student and help another organize ideas and communicate with confidence. 

By understanding what's behind the AI curtain (statistics, not magic), we learn how to set smart guardrails, design better prompts, and turn AI's fluency into real learning and better decisions.
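That "statistics, not magic" idea can be sketched in a few lines of code. Below is a toy next-word predictor built from simple word-pair counts — a deliberately tiny illustration, not how production systems work (real models like Gemini use neural networks trained on billions of pages), but the core move of predicting a likely continuation from patterns in text is similar in spirit. The corpus and function names here are invented for the example.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" -- it followed "the" most often
```

Ask it what follows "the" and it answers with whatever followed "the" most often in its training text; it has no idea what a cat is, which is exactly why a human needs to stay in the loop.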

If you’re ready to replace AI mystery with mastery, press play on this episode! 



aiEDU: The AI Education Project


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
SPEAKER_01 (00:00):
We've always been on a quest for knowledge as people. And we've always wanted to understand ourselves better. How we think, how we make decisions, how we improve our world. And I think the origins of AI are part of that quest.

SPEAKER_04 (00:22):
Today we're going back to the beginning to try to understand just what AI is and how it works. It's AI 101. Welcome back to a new episode of Raising Kids in the Age of AI, the podcast from aiEDU Studios in collaboration with Google. I'm Dr. Aliza Pressman, developmental psychologist and host of the

(00:42):
podcast Raising Good Humans.

SPEAKER_02 (00:44):
I'm Alex Kotran, founder and CEO of aiEDU, a nonprofit that helps students get ready to live, work, and thrive in a world where AI is everywhere. Today in the pod, we have two AI experts and parents with us to help: Maya Kulycky, VP of Strategy and Operations for Google Research, and Chief Operating Officer of Google DeepMind, Lila Ibrahim. They're going to help us see the shape of AI, from its origins to

(01:06):
its future and best practices, and go over some of the different types of tools that are available.

SPEAKER_04 (01:12):
This is just what I need, honestly. But before we hear from Maya and Lila, I have a few questions that I think are so basic, but we never really define them. And I like to have things defined. So I'm going to start with just what actually do we mean when we say AI?

SPEAKER_02 (01:27):
So AI can mean a lot of different things. It's a relatively broad term. In its simplest form, it's technology that allows machines and computer systems to perform tasks that typically require human intelligence: things like learning, problem solving, decision making, perception, and understanding language.

SPEAKER_04 (01:45):
Okay. And these are terms I hear, but I have no idea what they are. LLM.

SPEAKER_02 (01:50):
So an LLM is a large language model. And it's okay. So basically it's a computer program. It's trained on massive amounts of data. And we're talking billions and billions of pages from books, articles, websites, forums. And you take that data and you process it with literally the most powerful supercomputers in the world. You add a little bit of refinement from human feedback

(02:11):
and you get this incredible tool. We call it a model. And it can generate all kinds of content and engage with you in natural conversational English.

SPEAKER_04 (02:19):
Got it.
That makes sense.
But then what's generative AI?

SPEAKER_02 (02:23):
Generative AI is a little bit broader. It includes large language models, but it also includes tools that can create art, images, video from text. Uh-huh. Because you can kind of use them interchangeably: if someone's talking about generative AI or large language models, more or less they're talking about the same broad set of capabilities.

SPEAKER_04 (02:44):
Okay.
So now I feel ready to dive in.
Who are our AI 101 instructors?

SPEAKER_02 (02:50):
So lucky for us, we have two folks who sit close enough to the hard science to see everything up close, but they're also coming at this on the side of the humanities and can help translate some of this complexity to non-experts who are trying to peel back the curtain. So we're going to hear from DeepMind's Lila Ibrahim later on, but first we're going to hear from Maya Kulycky.

SPEAKER_01 (03:09):
My name is Maya Kulycky. I lead strategy, operations, and outreach for Google Research.

SPEAKER_02 (03:13):
Part of Maya's role is working with research labs to help shape the next iteration of technological breakthroughs that we hope are going to change the world. Maya also had a hand recently in introducing Google AI Essentials, a course designed to help answer some of the most basic questions that people have about AI.

SPEAKER_01 (03:29):
When we think about AI and its origins, it's really tied to the human brain and our ability to try to understand: how do we think? How does the human brain work? And to replicate some of the activity that we do as people in order to assist us. Thinking, understanding, learning.

(03:50):
These are different cognitive tasks that are part of AI. What AI can do is remarkable. It can't do what we do as people, and it is not as efficient as the human brain. The human brain is a remarkable, beautiful organ that runs off very little energy. That's why people have wanted to find something

(04:14):
similar to it, you know, since the dawn of time. It is exceptional and unique. But we've created some tools that have some aspects that are shared with the human brain, and we are very, very grateful for them. They allow us to share information and to understand things, as people using our brains, much better than we did

(04:34):
in the past. And to get to mutual understandings more quickly, too. I always encourage people to go and experiment with large language models and see how they can be used in their lives to make their lives easier. It certainly makes my life easier.
I'm a parent and I have two lovely kids, ages 13 and 15, who

(04:55):
would be cringing right now if they knew that I was mentioning them. I'll tell you some of the personal miseries that AI has eliminated, from a very, very basic point of view. So if my kids, for instance, would ask me for a jacket: I want a jacket, I want a red jacket, okay? I'm gonna search for a red jacket and I have to type in, I'm looking for a red jacket.

(05:15):
And then, you know, the conversation goes like, well, what kind of jacket? Oh, it's a puffer jacket, okay. And that's not that color red, it's a different color red. Okay. I mean, we're 15 minutes in and we're still looking for this jacket. AI has allowed us to do things like Circle to Search, right? Or I can give it a picture of someone wearing the jacket that my kids are asking for, and then a few seconds

(05:38):
later know exactly where that jacket is, where I can buy it, how much it costs, is it on sale, things like that. So I'm thankful for all that time not wasted. At the same time, please, please do not take the output of a large language model and think you're just gonna take that and roll.
No, it's meant to be a collaborative tool.

(06:00):
It's meant to be something that helps you get to a certain point. You look at it and you say, oh, okay, this is a great place to start, but I'm gonna improve it. Or, oh, this piece is not what I meant; I need to change this, I need to change that. It's something that you need to check the work of. It's not something that is meant to be a freestanding tool

(06:20):
without us as people.

SPEAKER_04 (06:22):
It's not just Maya here saying this, but in the previous episode, we heard similar ideas. You do have to check your work.

SPEAKER_02 (06:28):
Yeah, this is what we mean when we talk about the human in the loop. It's this idea that if we really want AI to be an extension of us and not a replacement, we have to understand our role in that equation. And that brings us back to why it's so important to understand AI. Because if we understand it, then we have the building blocks to actually begin to harness it and use it for our

(06:51):
purposes and desires, as opposed to what the AI thinks that we want.

SPEAKER_01 (06:56):
I talk to my kids about AI, about what it is and what it isn't, how to use it and how not to, and the guardrails that I want them to have around the technology. I think the most important thing for kids to have is their humanity. Hold on to the exceptionalism and the beauty of what we are as

(07:17):
people and what we can create, and use these tools to do it. Be thoughtful about it. Pursue your dreams. But they're your dreams, right? The things that you want to accomplish in your lives as kids, the things as parents that you're envisioning for your kids to be able to do. You just have another tool in your toolbox to do that.

(07:40):
Use it with caution. Make sure kids are educated about what AI is and make sure that they understand the limitations of AI, and then let them walk into their own exceptionalism as people.

SPEAKER_04 (07:55):
I love what Maya said about the human brain. It's so cool to hear someone in technology acknowledge that this is irreplicable. And so I think we need to keep on having these conversations, modeling how beautiful this human brain is, holding on to

(08:15):
our humanity, and emphasizing that with our kids. That this is not a freestanding tool.

SPEAKER_02 (08:20):
Yeah, it's not intuitive. It's simultaneously one of the coolest aspects of language models, the fact that you get this very human-like experience. It feels like you're having a conversation. And it's actually hard to get past that into what's really an abstraction, almost like a compression algorithm for the entire internet. So to put it more basically, the AI is guessing how

(08:45):
a human might answer your question, or how an expert might answer your question. But it's actually a lot of statistics that's happening in the background, statistics that we don't even fully understand. It's beguiling, even to the experts. And so one of the biggest challenges that we have as educators and as parents is: how do we kind of pierce through some of that?

(09:05):
If you're feeling confused, you're in good company. There was actually a study a couple of years ago that listed a bunch of everyday instances where we know AI is used: things like wearable fitness trackers, chatbots, product recommendations, spam identification, music recommendations. And only a third of respondents were able to identify that AI is

(09:28):
used in all of those examples. So this is a relatively widespread problem; people are kind of flying blind, as it were.

SPEAKER_04 (09:37):
Yeah.

SPEAKER_02 (09:37):
And what it means is that there are still so many of us using AI regularly who don't even realize that it's AI. So if there's just one thing that you can leave this podcast with, it's this curiosity: when you're using technology and it feels like magic, really ask yourself what might actually be

(09:58):
happening behind the scenes. And once you understand that this isn't magic, but actually just some fancy statistics happening in the background, it leads you to the next question, which is: what is my role in using this technology, so that I'm not purely beholden to what the statistics think I should be doing or what the answer should

(10:20):
be. Does that make sense?

SPEAKER_04 (10:22):
Yeah. I actually, dare I say, feel like I have my bearings, and I want to go a little deeper and understand the bigger potential of AI. But I also want to understand a little bit more about what kind of work is going into building these models safely and responsibly. I think that's on a lot of our minds.

(10:42):
So we're really lucky to hear from our next guest.

SPEAKER_05 (10:46):
I'm Lila Ibrahim. I'm Chief Operating Officer of Google DeepMind. We do a lot of work around the research of AI and bringing that into the world. And part of that will be through large language models, through products like Gemini, but we also apply AI to some of the world's most challenging scientific problems.

SPEAKER_02 (11:08):
Lila was brought on as DeepMind's first Chief Operating Officer in 2018, and she has spent more than three decades in the tech industry. Today, in addition to overseeing its day-to-day operations, Lila's work focuses on impact and responsible innovation. And one of the things we asked her about is how Google DeepMind's advanced AI is making exciting new discoveries possible, including changing the way we understand the past.

SPEAKER_05 (11:30):
There's a really cool project we've done called Project Aeneas. It takes ancient texts and helps us fill in the gaps. Imagine a stone is broken and we only have part of the text. Where did it come from? What context might there be? Help us translate it. So imagine now historians being able to use artificial

(11:52):
intelligence to help us unlock understanding of the past, something that we thought might not have been possible. Another area where we've applied AI is something called protein folding. Our advanced AI system called AlphaFold was a breakthrough discovery. Actually, it got the Nobel Prize last year, the first time ever in history that the Nobel Prize has been awarded

(12:15):
for the application of AI. And it helps you predict the 3D structure of a protein and its interactions with small molecules. But why is that important? Imagine being able to understand diseases better and then come up with better therapeutics, or break down industrial waste, or figure out how to grow crops more resistant to

(12:37):
disease. This is all possible now with the help of our advanced AI system, AlphaFold. What's really been transformational about this is we have over 2.5 million researchers in 190 countries using this technology, as simple as a Google Maps search, using this database for free to advance scientific discovery and

(12:59):
create a better future for us all.
When Google DeepMind was founded, at the center of everything was this belief that we had to do this responsibly. It's because we thought this could be such transformational technology. And transformational technology requires exceptional care. That means everything from our early stages of research to how we do the development, what our governance models are

(13:22):
internally, and even how we deploy it. So just to give some specifics: with our advanced AI system AlphaFold, before we released it, we worked with dozens of researchers to ask, is this safe to release? How do we need to think about it? And it actually led to a partnership with the European Bioinformatics Institute, which had this network of researchers

(13:44):
and could help us birth this technology into the world in a responsible way. And that's a general philosophy that we have: how do we bring those voices and those communities into the process as we're developing? Whether they're music artists or teachers or experts in learning science, all of that gets put into how we

(14:05):
develop it. We like to think of it as: AI shouldn't happen to us. It should happen with us.

SPEAKER_02 (14:13):
We can't say this enough: we can't just be bystanders in the AI story. By understanding the technology, we can and should be an essential part of that story.

SPEAKER_04 (14:21):
In her personal life, Lila's found AI to be an incredibly useful tool for distilling essential information.

SPEAKER_05 (14:27):
I've actually taken NotebookLM and uploaded household manuals: how to use my dishwasher, my washing machine, the coffee machine. And so now whenever I have a problem, I just go and say, okay, using all of this information, tell me why this light has gone off and what I need to do about it. You can imagine we have a lot of conversations in my household

(14:49):
about AI, and every family is different. But what I've personally found is that being very open and having conversations around the technology as it evolves has been really important. I have twins, and they both use the technology very differently. One is a traditional learner, and what she's found is that AI doesn't judge her questions and will willingly go down rabbit

(15:11):
holes of learning, rather than give just a single textbook answer of here's what we need to learn. My other daughter is dyslexic, and what she's found is that she has all these ideas that she sometimes has a hard time expressing. And what she's able to do with AI is organize her thoughts and communicate them in a way in which other people will

(15:34):
have an easier time understanding what she's trying to say. And so it's completely unlocked her potential in a way that has also given her a big boost of confidence where she may have struggled in the past.

SPEAKER_02 (15:49):
All right, Aliza, we've talked to two very impressive members of the Google team who are working on this. And one thing that stands out to me is that they're also learning, sort of, as they go. They're not presenting as if they have all the answers, and a lot of the descriptions and examples they've given were really examples of where they're experimenting.

(16:09):
But it all really comes down to, again, this idea that AI happens with humans, not to humans. And collaboration, really harnessing it, is at the center of it.

SPEAKER_04 (16:19):
Yeah, I mean, it's definitely more transparent. It's not yet translucent for me. But one of the things that I'm also wondering is, when I think about research, we have so many questions all the time. Like, can I input data into AI and tell it what kind of analysis to do?

(16:39):
These kinds of things excite me, because that saves so much time. So I'm definitely curious about that. I've certainly never used AI in this way, but I'm kind of growing alongside this podcast, because I'm learning about how I might use it in the future.

SPEAKER_02 (16:57):
So if you still have questions and you want to learn more, you can check out some of our resources at aiedu.org. And you can also just spin up Gemini and start asking some questions yourself, including some of the ones that were asked here. My guess is that Gemini is going to have some really compelling and solid answers.

SPEAKER_04 (17:18):
Thank you so much for listening. Join us again next week, when we take a look at an AI-enhanced classroom from a teacher's perspective and what it means for your kids' education. We'll hear from New York City public school teacher Shira Mauskowitz.

SPEAKER_03 (17:32):
This teacher came to me with something that was not a technical issue per se. And I presented him with a technology solution, but it actually addressed his challenge and more. The behavior was better, the engagement was better, his scores were actually higher.

SPEAKER_02 (17:47):
You'll also hear from Google's Jennie Magiera. Jennie's the global head of education impact at Google and a former classroom teacher herself.

SPEAKER_00 (17:54):
It's giving me that feedback. It's almost like I'm the head coach of a Big Ten football team, and I've got all the other assistant coaches in my ear telling me, like, hey, you need to go over here. Hey, let's pause this lesson and regroup, because they're not getting it. We need a timeout. So that's really magical.

SPEAKER_04 (18:12):
Together, they'll tell us what the classroom looks like when it's supported with AI tools, and the ways it can help support different learners.

SPEAKER_02 (18:19):
Find out where AI will take us and future generations next, on Raising Kids in the Age of AI. Until then, don't forget to follow the podcast on Spotify, Apple Podcasts, YouTube, or wherever you listen, so you don't miss an episode.

SPEAKER_04 (18:32):
And we want to hear from you. Take a minute to leave us a rating and review on your podcast player of choice. Your feedback is important to us. Raising Kids in the Age of AI is a podcast by aiEDU in collaboration with Google. It's produced by Kaleidoscope. For Kaleidoscope, the executive producers are Kate Osborne and

(18:53):
Lizzie Jacobs. Our lead producer is Molly Sosha, with production assistance from Irene Bantiguay and additional production from Louisa Tucker. Our video editor is Ilya Magazanen, and our theme song and music were composed by Kyle Murdoch, who also mixed the episode for us. See you next time.