
July 17, 2025 45 mins

What makes us uniquely human in the age of AI?

According to Isabelle Hau, Executive Director of the Stanford Accelerator for Learning, it's our ability to form meaningful relationships, a feat she calls "relational intelligence."

Our conversation with Isabelle explores how education systems need to evolve beyond measuring success through grades and test scores and toward fostering the human connections that will matter more in an AI-powered world.

Drawing from her forthcoming book Love to Learn: The Transformative Power of Care and Connection in Early Education, Isabelle makes a compelling case that our educational priorities need realignment: "We have focused for a very long time on cognitive intelligence, which a lot of people would know as IQ. Over the past 20 years, there has been a shift toward emotional intelligence or EQ. But I believe we are at a juncture where we need to think a lot more about relational intelligence."

The discussion delves into Stanford's innovative approaches to AI in education, including its AI Tinkery, where students and community members experiment with AI tools to solve their own problems. Rather than viewing AI merely as a tool for efficiency, Isabelle challenges us to consider how technology can transform learning experiences to better develop collaboration, creativity, and human connection.

Whether you're an educator, parent, researcher, or simply curious about the future of learning, Isabelle offers valuable insight into nurturing the skills that will truly matter as AI transforms our world. 

Learn more about Isabelle Hau and the Stanford Accelerator for Learning:



aiEDU: The AI Education Project


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Alex Kotran (aiEDU) (00:05):
Hello everybody. I'm Alex Kotran. I'm the co-founder and CEO of the AI Education Project, aiEDU, and we're here in DC for the mobile version of aiEDU Studios with a colleague that I've been spending a lot of time with over the past year-plus, in different formats, different conferences. We were just at a meeting two days ago in Washington DC,

(00:29):
Isabelle, how about you tell me? Because you actually have a multifaceted career, and even in your role now you're doing a few different things, so I'll let you give us the description of, you know, what are you doing right now? What's your role at Stanford? And then we can go from there.

Isabelle Hau (00:42):
Yeah, Alex, thank you for having me, and thank you for all the work we are doing together. Such a joy. So I'm currently the executive director of the Stanford Accelerator for Learning, which is an initiative that sits at the Graduate School of Education at Stanford University. It's very interdisciplinary by nature and by goal, seeking

(01:03):
to really transform education as we know it today, to have a lot more learning embedded within education systems. It sounds very obvious, what I'm saying, but we have had a long history of focusing on access in education, not on learning, and

(01:24):
I love our name, the Stanford Accelerator for Learning and not the Stanford Accelerator for Education, as part of that. That's where the two of us have done a lot of work together. We have done a lot of work on artificial intelligence and what it means for our future of learning together. In addition to my role at Stanford, I have a few other

(01:46):
hats, including one that I've been spending a lot of time on over the past few years, I would say at this point, which is writing a book called Love to Learn, which is, you know, my third child, as I

(02:09):
call it. And yeah, we can also maybe chat more about this endeavor, but it's really a fun ride, a fun journey, and one that's just starting, as I'm publishing it in a few days.

Alex Kotran (aiEDU) (02:21):
I think what's interesting is that when you described the work of the Stanford Accelerator for Learning, you didn't use the word AI at all, and yet you actually were one of the keynote speakers at last year's National AI Literacy Day. And, yes, we've sort of intersected in so many of the spaces where organizations are at sort of the forefront of

(02:42):
thinking about the intersection of AI and education. Obviously, Stanford is a leader in sort of developing AI; there's the Stanford Institute for Human-Centered AI. How prominent is artificial intelligence in the organization that you're leading now? Is it something that you've sort of been dragged into, or is it sort of an initiative? Like, how did it come up?

Isabelle Hau (03:06):
Yeah, so Stanford has been involved in artificial intelligence in a deep way for a long, long, long time. As a matter of fact, we coined the term artificial intelligence at Stanford in the 1950s, so it goes back a long, long time for us and for many people at Stanford. Within my team, the way

(03:30):
we were organized before ChatGPT was introduced was always that we had a digital learning practice, but we had five other practices that do not have digital learning in their name, although they could have some technology embedded, and all of them do in some ways. So we have three that are age-based: one in early childhood

(03:53):
education, which is a very important area of work and an area that I'm so proud a university like Stanford would put such an emphasis on. The early years are generally very underfunded and underappreciated, so it's great to have an amazing set of talents and researchers

(04:16):
focused on that age group. Then we have an area focused on policy in K-12, and a third one focused on adult and workforce learning. And then we have three transcending themes: digital learning being one, and the two other ones are learning differences and equity in learning.

Alex Kotran (aiEDU) (04:36):
Wow. I do think it's powerful that, you know, when you come into the conversation about artificial intelligence, your reason for being isn't to prove that AI can transform education. That's not the goal. It's almost like a question that you're seeking to answer,

(05:00):
and you're centering not on technology but on the outcomes, which are very student-centered. Is there any overlap in the work you're doing at Stanford with your book, or did the book predate your journey to Stanford? How long have you been at Stanford?

Isabelle Hau (05:16):
I've been at Stanford for about three years now, and before Stanford I was also involved in education, but in a very different capacity. I was working in impact investing and philanthropy, supporting many organizations, some non-profit, some for-profit, all with an impact goal and all in education.

(05:38):
A big focus on the two extreme age groups, the adults and the little learners, because those are generally the most underfunded and under-innovated in our system. So I had been doing this, and I was wanting to put a lot more of my thoughts on paper,

(06:01):
especially during COVID. I felt like this was one of the largest crises in learning unfolding in my life, so I wanted to be more reflective about what I had learned, and I thought, okay, let me try long-form writing. What will it take, anyway? And here I am, three years later, with this book that's

(06:23):
coming out.

Alex Kotran (aiEDU) (06:24):
Yeah, oh, that's so exciting. We were talking about podcasts as long form; the book is really the true and original long form, yes. So yeah, tell us about the thesis of your book. And are you willing to share the title?

Isabelle Hau (06:39):
Yeah, so the title is Love to Learn: The Transformative Power of Care and Connection in Early Education. And the big thesis that I have is that we have focused for a very long time on cognitive intelligence, which a lot of

(07:01):
people would know as IQ, which in the education system has translated into a lot of academic success, GPA equivalents. Over the past 20 years, there has been a shift, especially in the workplace, toward thinking about a different form of intelligence that a lot of workplaces would refer to as EQ, emotional intelligence.

(07:24):
In schools, that has translated into a lot of the SEL, social-emotional learning, movement. And I believe that we are actually at a juncture where we need to think a lot more about a different form of intelligence for all of us humans, which is relational intelligence.

(07:45):
So, as artificial intelligence is rising, what is really human? What is really human is our ability to connect with each other, and that translates into every part of our lives. You know, if we think about work, the greatest skills that are

(08:06):
emerging are collaboration, teamwork. All those things are clearly at the top of every list, whether it's LinkedIn or the World Economic Forum; on all these lists they are always top three. And all these skills actually are best acquired in the very, very early years of life.

(08:26):
There is great research, which I can go through if you allow me, about how transformational the early years are from a relational perspective. And the early years, again, are a period of life where, from a system perspective and a policy standpoint, we don't really

(08:48):
invest, and we always neglect them. We think we are not really learning, when in fact we are learning the most in the early years of life.

Alex Kotran (aiEDU) (08:57):
Yeah. This is interesting, because I think I see CJ's ears perking up. We just released our first foray into elementary curriculum. It's called Elementary Explorations, and it really stemmed from... we got a lot of requests. I think the most common question we got is: do you have anything for

(09:18):
elementary or K-5? And we were very small, we had our hands full; we were like, we'll eventually get to that. And yet we had a lot of people pushing us that, in fact, even though high school is probably where students are going to be maybe the most hands-on with AI tools, our thesis about, you know, how to

(09:39):
prepare students for the future isn't actually predicated on learning the tools; using the technology is just one component, maybe even a small component. But it is interesting that what you're talking about in terms of early childhood is almost mirrored in, you know, workforce readiness. And, yes, CEOs will often say, we're looking for people

(09:59):
that have those skills.
You described schools today focusing on SEL, and it's interesting that they've kind of siloed everything into this sort of separate, almost vertical track. And what you're describing is actually a little bit more of transforming education end to end. Am I describing it right?

Isabelle Hau (10:18):
Yeah, ideally this would be translated across age groups. Human relationships (we can speak about AI relationships too), human relationships are so critical at every point of life. We now have evidence that, for example,

(10:38):
when you're at an older age, if you have strong relationships, you live longer. So it has huge benefits, not only for learning but also for health outcomes. And for learning, the impact that relationships have on the brain is also really clear, as well as on academic outcomes. By the way, there's great data from multiple sources, but the

(11:03):
one that I'm thinking of is the Search Institute, which has done some really, really clear studies that show correlations between academic motivation, for example, and the number of strong relationships a high schooler has. So if you start at zero and then you go up, you know, one,

(11:23):
two, three, four strong adult relationships that a high schooler has, the better they do on academic motivation and a number of other metrics from an academic outcome perspective. So that's the impact of relationships on not only the

(11:44):
brain but academic outcomes, even health. It's one of those very, very innate things: we have social brains.

Alex Kotran (aiEDU) (11:58):
Let's assume that our audience, which I think will be a mix of educators, education leaders, folks who are maybe decision makers, but also parents, is nodding along. They're already bought in, and they're already moving to the question of: what does this actually look like? What does a school that is, you know, building this relational

(12:20):
intelligence look like? My mind goes to project-based learning and students working in groups. But is there any more fidelity that you can put around the practices in the classroom that have really done the best job of building those skills that you identified?

Isabelle Hau (12:34):
Yeah, relational pedagogy, absolutely. So: project-based learning, small-group instruction, play, a lot of play. Play is like one of those miracles in education. I mean, not only free play but guided play. That's how all animal species develop, and certainly also how

(12:58):
we learn best: when we have joy and when we connect with others. So all these elements, you know, when people engage with content, when it's relevant, when it's based on interest and passion, that's when people connect with the learning.

Alex Kotran (aiEDU) (13:19):
Of course. And so, bringing it back to your work at the intersection of AI: as you've seen, especially post-ChatGPT, this almost Cambrian explosion of products and tools and use cases, or potential use cases. How much are you excited versus maybe skeptical or even

(13:44):
concerned? I mean, where is your meter on that spectrum?

Isabelle Hau (13:50):
Yeah, I would say I'm generally a tech optimist, but with a lot of reservations, especially right now, where there

(14:11):
is a clear concern regarding the ability of independent organizations, like academia, to have a voice in the discourse about artificial intelligence. Right now we are trying to be a force of integrity, of knowledge and dissemination, and so forth, but

(14:32):
with, obviously, a lot of funding going into the private sector, and also partnering with the private sector. Anyway, there are lots of questions at play right now for artificial intelligence which give me a little bit of pause.

Alex Kotran (aiEDU) (14:53):
I think that's the challenge with AI: you have tactical, almost logistical questions about privacy and safety, sort of feasibility, implementation, cost. And then I'm really interested in sort of digging into pedagogy. Just, you

(15:14):
know, broad strokes right now: do you feel like AI, in its current form, is something that teachers should be moving faster to figure out how to implement right now, today? Should that be their focus, or is that actually a distraction? Are there other things that maybe should be higher priority? And if AI can help those things, perhaps explore that, but don't

(15:37):
be distracted by the shiny object.

Isabelle Hau (15:40):
So right now, there is a clear area of opportunity with AI, which I would put under efficiency: how to make existing systems faster is a clear opportunity with this technology. However, if we stay there, this, I think, would be very

(16:00):
disappointing for many of us. So how do we also transform existing systems to make them better, not only faster? What I mean by this is: how do we think about the whole body of learning science around what we know about how people learn? We spoke about relationships and connections, collaboration.

(16:24):
How can AI, for example, make us better connected as humans? It's a huge area of opportunity, but if we want to evolve there, we need to be super intentional, and right now, I would argue that the intentionality is not there around what the direction is for education systems that could be

(16:49):
better by using and leveraging this technology.

Alex Kotran (aiEDU) (16:55):
Yeah, this question of efficiency really resonates, right? Because the example that I often come back to (I need to come up with some more examples; maybe CJ can help me think of them) is, you know, the self-checkout machine: very efficient, but it hasn't actually improved the experience of going to a grocery store, and maybe it's even made it worse. I worry about what

(17:18):
that could look like in education, where there's, you know, AI that saves time, but that's all it does, and it isn't in some way providing teachers with, or perhaps it's the school systems that need to be, intentional about what we do with that efficiency gained. You could imagine... I was talking to an English teacher once, and he was telling me, I've been
(17:40):
once and he was telling me, likeI can actually now I've been
covering an extra class because we're short an English teacher. Thanks to ChatGPT, I'm able to actually cover all three classes, and it takes me a lot less time than it used to. His concern is that the principal is maybe less motivated to fill that open role, and so I don't think it's a good outcome if now this poor English teacher is doing three

(18:01):
classes just because he can.

Isabelle Hau (18:06):
That's right. This is actually a super big concern of mine too, and I love your example of the checkout lines.
Right now, we actually are about to do some research on this: does efficiency, so saving time, lead to better outcomes for

(18:28):
learners? That would be one question, to which we don't have a clear answer at the moment. We are making an assumption that efficiency actually leads to better learning outcomes, but it's not clear whether that is the case. Actually, historically, it has not been in education.
So with this whole focus on efficiency, I think we need to

(18:49):
be a little, you know, just doubting of the line of thinking that it could actually drive better grades and better education overall.
But also the other concern that I have, pushing your thinking one

(19:10):
step further, Alex, is this, and I was waiting for it, and it happened: my concern was actually, in some ways, well placed. The saving of time for teachers, someone translated it into an economic value. So saving time for teachers, having all these AI tools, could

(19:33):
drive, you know, the equivalent of X billion dollars, or whatever the number was, which means that it could lead to exactly the situation that you are alluding to, which could be: well, let's actually fire some teachers, or have them work

(19:53):
less time, pay them less, as opposed to saying, oh, we have more time, so this time could actually be used for relational time between a teacher and a child. So when I saw that first economic value, that billion dollars somewhere, I was like, oh, this is really interesting. I think we are heading to exactly that question:

(20:18):
Is efficiency going to lead to teachers actually being even further squeezed?

Alex Kotran (aiEDU) (20:26):
Yeah. It feels a bit intractable, because schools are under so much pressure, and there's a lot of uncertainty now with the new administration. By the time this podcast comes out, I suppose we'll see if the Department of Education even exists, and I say that

(20:47):
facetiously, but I think it's legitimately an open question. To get to this, you know, perfect world, where teachers do have time and schools are sort of thinking about the future and figuring out how to unlock more, not just

(21:07):
efficiency but productivity: what I'm hearing from you is that it's almost like past forms of automation. In some cases automation just created efficiency and cost savings, and in other cases it actually unlocked productivity. We need to figure out how to orient schools toward this productivity and the reinvestment of that time. But schools are under pressure, and so I'm not

(21:32):
going to put you on the spot and ask for the answer, but I'm curious: what do we need to do over the next few years to make this case? I mean, who do you see as the critical decision makers and champions of this reinvestment frame and approach, as we harness some of these promises and the gains that AI could, and probably will, bring?

Isabelle Hau (21:56):
Yeah, I think that there is going to be, regardless of political views, a big shift, and I think we need it urgently, because we right now have a widening gap between what education teaches and what the workforce needs. And so all

(22:17):
these skills around collaboration, which we discussed, or creativity, critical thinking, all those skills that the workforce needs, arguably a lot of them are actually not being taught properly right now in our current education systems. Is AI the solution to get there?

(22:40):
I'm not sure, but that's maybe the opportunity in front of us: where could technology be intentionally used to develop some of those skills? So where could we, you know, partner more intentionally, Alex, around that? Where could AI be used for fostering creativity?

(23:00):
We launched, for example, a Learning Through Creation with GenAI Tools program, for which we just announced a cohort of 15 winners. That's one of the areas: where can we use this technology to make us more creative? Maybe, yes, by exploring. This is a really cool tool that kids can actually use to be

(23:22):
more creative. Or collaboration: we have a big project going on right now on AI agents playing different roles to foster or increase collaboration in classrooms, at the college level, but super cool. So could we do a lot more of this, as opposed to, or in addition,

(23:46):
maybe, to also investing in efficiency where it can be helpful, while also ensuring that we are looking at those future skills that all of us know we need, and need urgently?

Alex Kotran (aiEDU) (24:02):
And this feels like the role of you, of research and universities, right? Identifying a North Star that for-profit companies, understandably, are just not able to orient to even if they wanted to, or are less likely to, let's say. Can you bring to life one of the initiatives you just mentioned?

(24:23):
Who are the winners? Are these schools or teachers or innovators? Like, what does the cohort look like?

Isabelle Hau (24:31):
Yeah, so for that particular seed grant, the proceeds go to scholars or students at Stanford, so it's within our community, within the Stanford vicinity, with a preference for those projects that are in partnership with schools or with communities.

Alex Kotran (aiEDU) (24:54):
K-12 or university, or across the gamut?

Isabelle Hau (24:56):
Across age groups by design, because we are lifelong at the Accelerator for Learning. I would have to recount exactly, but I would say about 50% are K-12 projects, some really cool projects ranging from music to, you know, teaching

(25:17):
with creativity. I mean, a wide range of different, you know, cool projects have emerged, and we'll see which ones. We're investing a little bit of funds to support those projects. A lot of them will not be proceeding, you know, very early stage, but some will thrive over time, so that's

(25:38):
exciting.

Alex Kotran (aiEDU) (25:40):
Yeah. From one of the other guests we just had on, I'm hearing echoes of play, because what Greg Toppo was sharing is, you know, we were talking about how systems can change. One of the schools you work with just down the street, Prince George's County, has 10,000 teachers, and so there's a question. It's easy to imagine one teacher in Prince George's County, you know,

(26:01):
innovating and, you know, pushing her classroom to really harness or build more communication skills or really leverage project-based learning. But as we're thinking about the systems level, you know, the question was: how do we go from that one teacher to the whole system? And the best we, and I'm curious for your take on this, the best we could come up with was: you

(26:24):
You have to sort of think abouthow do things, sort of cultural,
cultural moments, sort of howdo they go viral?
Um, they have to be reallycompelling and you can't sort of
force something onto people andsort of make them excited about
it.
It has to kind of be generated,you know, at the ground level,
by those people um, do you, isthis something that you think

(26:45):
schools should be doing, or, you know, other places around the country? I mean, the model that you have seems, you know, pretty straightforward. It's like: create a sandbox, set a goalpost. In your case, is there a specific outcome you were looking for from applicants? Is it student engagement, or was it more broad?

Isabelle Hau (27:01):
Not even. We have a theme, but we leave it open-ended for applicants to have their own measure of success. So it's pretty open-ended; we have a theme. And then, yeah, there's one other area where we are, which

(27:22):
actually is a little experiment, but which has completely grown and almost outgrown its own capacity at the moment, which is fascinating. We launched, just a few months ago, what we call an AI Tinkery. The concept is super simple. It's a physical place, not that fancy, because we didn't have

(27:44):
space. So we use a little hallway with two computers, where people are invited to come and tinker with AI tools. So what's happening is we have students and scholars coming by,

(28:10):
we have a number of workshops, but now we have more and more community members who want to come and tinker with those AI tools, because everyone is intrigued. They are essentially using those tools to address their own problems. So it's not problem-specific; it's really experiential learning at its best. You actually work on your own and design your own solution.

(28:36):
It's really cool, really cool to see. So, different workshops on using video, probably on Runway, and, yeah, different AI tools, a variety of things.

Alex Kotran (aiEDU) (28:50):
Are the outputs of that shared with the public? Like, if someone wanted to go and see what people have been creating?

Isabelle Hau (28:56):
Yeah, we have a little library that we are building. We also have (this is adjacent, but also sometimes offered at the Tinkery) a little workshop, also based on experiential learning, called Build a Chatbot. Build a bot, like

(29:17):
Build-A-Bear. Instead of building a little teddy bear, you build a bot. And what we have found is not so much that people will become AI experts or AI coders, you know; that's not the point. The

(29:39):
point is more that they get involved with the technology, see how it can address their own specific issue, and then find a solution. So the empowerment that happens through using this technology

(29:59):
is quite intriguing. People are delighted with this. The delight that people have after having identified their own problem, found a solution, and been able to learn through that experience is fascinating.

Alex Kotran (aiEDU) (30:17):
Yeah, there have been some, I think, who have... There was a stat, I need to find the source for this, but it was something like 99% of the people on LinkedIn who are sort of self-identified AI experts have only been in the AI industry for 12 months. Now, there are folks who have been in the AI space, you know...

(30:40):
I mean, it goes quite far back. You were talking about the 1950s; there are some who say it all began even with Turing and early computers. And there's a bit of eye-rolling at all these sort of, you know, newcomers who are purporting to be AI experts. I actually have a different view, which is that the power of language models and chatbots and even generative AI and AI

(31:02):
art is that folks, especially educators, people in the humanities, were really sidelined in these conversations. When I was talking about AI and working in the AI governance space back in, like, 2016, '17, 2018, it was a very small group of experts, mostly deep technology experts, and it

(31:24):
was almost impenetrable. And I think, with the speed with which so many new people are now able to come and tinker and play and explore, I'm all for it, and I'm very interested in, like, how can we emulate those models in as many places as possible and sort of generate the ideas locally? Not just because that's the best way to get ideas, to have

(31:45):
this sort of broad, diverse, you know, set of people who are sort of working on these questions and challenges, but also because, going back to this idea of, like, how do you get people excited about something? You know, if they own it and they feel like it was something that they built, it feels very different than, you know, an administrator coming and saying, we're now rolling out

(32:06):
this tool.

Isabelle Hau (32:08):
Yeah, or a class, or, you know, a direct-instruction model of, oh, you have to take this class on AI and it's part of your professional learning. I think it's a very different idea to say: why don't you play with some of those tools and experiment with them for yourself?

Alex Kotran (aiEDU) (32:29):
Yeah, what you're describing is, I think, in line with how...

Isabelle Hau (32:32):
Create, create and be a creator. I think this idea of creation, of being a creator, is fascinating for all of us.

Alex Kotran (aiEDU) (32:39):
Yeah, building student agency. It feels like we now have the opportunity to give students... the tools are an order of magnitude more powerful than they were five years ago, let alone when I was in high school, and I don't know that teachers and parents have really grasped, you know, what their kids are capable of. And I

(33:02):
think that's why some of the visceral reactions, like banning access to technologies, while I understand where they come from, there is something that we're missing in the fact that students are tinkering. Even if they used ChatGPT and did a really good job using it to cheat on an assignment, you know, I don't think that that's okay, necessarily, but I think

(33:23):
there's a kernel of something exciting there: they were, like, learning how to prompt-engineer it and adjust the voice. And so, yeah, I suppose there's a question of how do we start to make this orientation towards exploration a little bit more mainstream?

Isabelle Hau (33:36):
Yeah, we just tried a different model this past December, where we organized a hackathon. So the theme, which we wanted to elevate, on your point earlier about the role of academia: where are places where the private sector is unlikely to go? Well, one is learning differences, because people

(34:01):
with disabilities, while they are a very large number overall, for each disability it is a very small number. Often, dyslexia is probably large enough, and ADHD, but anyway, I'm diverting here. For most of them, those are very small numbers from a for-profit market standpoint. So that's a place where academia has a huge role to play

(34:24):
And non-profits and the non-profit sector. So we partnered with amazing organizations like CAST and CHC, the Children's Health Council, in the Bay Area, and a few other amazing organizations, including the Alana Foundation in Brazil.

(34:46):
Anyway, a great, great group of partners. And we said, okay, on the first day we are going to do it the traditional academic way. We're going to have a working symposium, and we are going to think deeply with all the right constituents about those critical issues, about how to advance AI for children and

(35:09):
learners with learning differences, and educators, and families, so all these different stakeholders. The next day we organized a hackathon: we invited the same people and we opened it up to the public. Anyone could come on campus, and you come in the morning.

(35:31):
You're going to form a team, so it was completely unknown which teams would actually materialize. A lot of unusual suspects came together, with one criterion, which was lived experience with learning differences. And then we decided to host it during one day.

(35:51):
A lot of hackathons are 48 hours or more; we decided to have it in one day. By the end of the day, we had 21 teams with 21 fully functional prototypes. The speed of creation, from ideation to prototyping, is

(36:18):
incredible with those tools, whether it's MagicSchool or other ones that exist. Just a really incredible speed at which people can actually produce a prototype. Yeah, super exciting.

Alex Kotran (aiEDU) (36:32):
Yeah. That's why the efficiency paradigm really does feel like a double-edged sword, because if you think about efficiency in terms of reducing the time to build novel projects, that actually feels like exactly the place where we want to be pulling that lever. And what you've described is that you created the conditions for that efficiency and
(36:54):
thought about how to actually bring people in. I guess you just created the conditions. Did they come in knowing how to use the tools, or is your sense that they were learning the tools?

Isabelle Hau (37:04):
Yeah, no, where I got really excited is the increasing access for people who had learning differences, who wanted to build solutions for their community but were not tech experts. So we paired them with AI tools, of course, but also with some tech mentors who came through,

(37:28):
who accompanied those teams during the day.

Alex Kotran (aiEDU) (37:32):
Really fascinating, yeah, the increasing access and the ease with which people were able to create. So we have a little bit of time left, and I'd be remiss if I didn't go a little deeper on chatbots. You talked about the chatbot. Build a Chatbot or Build a Bot?

(37:52):
Which one is it?

Isabelle Hau (37:54):
Build a Bot.

Alex Kotran (aiEDU) (37:55):
You talked about Build a Bot, and you've written this book about relational intelligence. There's been this instinct among some who have been exploring the possibilities of chatbots, and one of the things I've been hearing is that AI is going to allow us to

(38:18):
address things like loneliness and provide more connection for our kids. And that's conflicting with some other, I think very tangible, reports of widespread use of chatbots, often AI boyfriends and girlfriends, and

(38:39):
the jury is obviously still out. I don't know if there's any research, but if you've come across some, it'd be really interesting to hear about it. How should parents be thinking about chatbots right now? Is this something kids should be exploring, maybe even encouraged to explore, or do we need to be a little more thoughtful, maybe even pump the brakes a

(39:00):
little bit, in terms of just giving kids access to platforms like Character AI, where you can talk to everybody from superheroes to gamer daddy BF?

Isabelle Hau (39:11):
So we did, actually, we are doing that. We already published one big piece of research and we are about to publish another, not on Character AI but on a similar platform called Replika. And here is what we found. Number one: the users of Replika are more lonely than

(39:34):
average.

Alex Kotran (aiEDU) (39:36):
Is it pre or post?

Isabelle Hau (39:42):
Pre. Number two, and that's probably what I find the most fascinating: 90% of the users of Replika are confused and think that Replika is human-like. Nine zero, 90% of people, which equates to millions of users.

(40:07):
So we as humans are confused very quickly about those avatars and their anthropomorphic nature. Are they humans or are they machines? We are confused very quickly.

(40:37):
Number three: the research we did showed a slight reduction, actually not slight, a reduction in suicidal ideation. So an improvement in mental health. And then number four, which is of concern: displacement of human relationships. So it's a mixed bag. It's really, truly a mixed bag. Essentially, we are confused.

(40:58):
Maybe it helps a little bit those who are more lonely, but the people who are using those platforms are spending less and less time with human beings. So anyway, I think it's a really, really mixed bag, and an area that's absolutely fascinating yet very concerning. So, on your question, Alex, and I'm a mom too, I would be deeply

(41:23):
concerned. Those AI companions have the ability to confuse us very quickly. Right now those large language models are being refined, fine-tuned, all those things, but they still show a lot of biases, a lot of issues. Anyway, I would be very concerned as a parent, and

(41:45):
I would watch out by having regular conversations, being curious about what your child may be doing with those AI chatbots or AI companions, asking a lot of questions about what the child is doing, ideally having a lot of dialogue. And maybe I

(42:07):
would go as far as limiting some of those tools.

Alex Kotran (aiEDU) (42:13):
Yeah, especially when you think about it, because you joked about Build-A-Bear and Build-A-Bot. I don't know how far we are; they probably even exist already, right, like a teddy bear that you can speak with and have conversations with.
This does exist, yeah. So I had this experience with Replika. Have you met Michelle Culver at the Rhythm Project?

Isabelle Hau (42:34):
So Michelle, shout out to Michelle.

Alex Kotran (aiEDU) (42:36):
She encouraged me to just try it out. So I got a Replika boyfriend, and I didn't spend nearly as much time as Michelle has, because this is sort of the focus of her work. But I tried one thing. I was chatting and I said, hey, I want to have a real relationship outside of Replika, and I'm thinking about asking somebody on a

(42:57):
date. How do you feel about that? And my Replika was like, oh my God, what am I doing wrong? He was really aggressively trying to, not prevent me, but sort of convince me not to do it. And he actually sent me a voice memo, and it was like, oh, I'm remembering the long walks, long talks that we had, and

(43:20):
we could watch Netflix together and rekindle the flame that we had. It was extremely manipulative, and it had also completely hallucinated, because I'd never spent any of that time.
That's obviously deeply concerning. When I shared that story with parents, their eyes turned into dinner plates. But the other thing I worry about, because when you talk about relational intelligence, I assume a portion of this is also persevering through the

(43:42):
challenging aspects of relationships, like being rejected, group dynamics. And what I see with chatbots is that they're never making you feel bad. They'll laugh at any joke, whether it's good or bad; they're always going to make you feel good. So I can understand why someone might feel less lonely, but then I worry about all the other stuff that you learn and experience in
(44:06):
the process with humans rather than machines.

Isabelle Hau (44:08):
But my key point in the book is this: we need to emphasize those human connections a lot more than we

(44:29):
have. And, if anything, our children have lost a lot of this ability to relate to one another, even before the pandemic, but certainly during this big crisis that we all experienced. So how do we rekindle that human connection and that love for each other?

Alex Kotran (aiEDU) (44:48):
Well, for anybody who's looking to hear more about that, we'll put a link to your book in the description. And with that, Isabelle, thank you so much for joining me. This was really interesting. I wish we had three hours, but I'll take the hour that we got.

Isabelle Hau (45:05):
Alex, the same. I feel like we could have gone on longer, but thank you for the time today.