Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Alex Kotran (aiEDU) (00:05):
So yeah,
here we are.
Thank you for joining.
I'm Alex Kotran. I'm the co-founder and CEO of aiEDU. We're here with the pop-up aiEDU studios in Washington, DC. I'm here with Chike Aguh.
Chike, you have a lot of different roles and titles.
(00:25):
I'm going to let you introduce yourself and tell us more about exactly what you're doing now. I don't know how many hats you're wearing at this particular moment, but I know you're a juggler of hats. With that, Chike, can you just tell us more about who you are, what you're up to now, and anything you want to share about your journey to getting there?
Chike Aguh (00:43):
Sure.
Currently, I serve as senior advisor at the Project on Workforce at Harvard University, a research project focused on these big questions around technology, the future of work and how it impacts education. We're based across the Harvard Business School, the Harvard Kennedy School of Government and the Harvard Graduate School of Education. I also serve as a senior advisor at Accenture, one of
(01:03):
the biggest technology and services firms in the world, on the future of work and innovation. And then, lastly, I'm a senior advisor, actually where we first met, at the McChrystal Group, which really works with organizations on, frankly, how they manage their people, particularly in times of change and tumult, and then a bunch of other roles where I get to spend time. Previous to that, from
(01:24):
2021, day one of the Biden administration, to mid-2023, I served as the chief innovation officer at the US Department of Labor, where I worked for, at that time, Secretary Marty Walsh and then, after that, Secretary Julie Su.
Alex Kotran (aiEDU) (01:41):
How many have there been? Chief innovation officers?
Chike Aguh (01:42):
At DOL, there was one other, who's a dear friend. He was the first, I was the second.
Alex Kotran (aiEDU) (01:47):
And was that in the Obama administration?
Chike Aguh (01:49):
Yes. Okay, the role was unfilled in the Trump administration and then was brought back with me during the Biden administration. And so when I think about this work and kind of what brought me to it, I'll go back a little bit in time. I think I'll talk about why I care, how I've approached it, and maybe some things I think that I've learned that inform how I come to this discussion.
So in many ways I say my family is a very classic immigrant
(02:11):
story to this country. My family is from a village in Nigeria that most Nigerians themselves will never see and never visit. None of my grandparents, all of whom were from the same exact village, went past middle school. My parents were born on streets that were unpaved then, that are mostly unpaved now. They had Peace Corps volunteers in their classrooms, and what
(02:32):
changed life for them was, in the mid-70s, early 80s, they got a chance to come to the United States of America for higher education, public higher education. They had actually meant to go back, and at that time there was a change in government, a violent change in government, in the country, and that was not an option, and so they stayed here in the United States of America. And I am the first person in the entire
(02:53):
history of my family born in America, and I got the chance to serve an American president. And so what I take from that is, in many ways, the small example of my family is what happens when America gets it economically right. I think we also know by the data we don't get it right with enough people in enough places enough of the time, and I've tried to devote myself to that prospect. And I think one of the theories of action that
(03:16):
I've always had is that the right type of education, the right type of economic opportunity, leads to generational change in families and, I think, in the life of countries. And so that's been my theory of change throughout my career.
I started off, and I know you'll have teachers and superintendents listening: I started my career in the New York City public schools, initially as a policymaker and then as a teacher in that system, God, almost 20 years
(03:39):
ago. And from there, after graduate school, I really went through the public, private and social sectors trying to work on these questions. So in the private sector, places like the Advisory Board Company, which works with higher education, and the McChrystal Group, really with General McChrystal as it came out of the military, trying to apply those lessons to large organizations,
(03:59):
private mainly, but also now starting to be public sector and nonprofits. In the nonprofit space, I was the CEO of a nonprofit called EveryoneOn, focused on closing the broadband gap, particularly for families with kids. We connected about half a million people in 48 states to the internet during the time that I led there, and they are
(04:19):
now getting towards 800,000 people across the country. I also went to a place called the Education Design Lab, where I helped launch their Community College Growth Engine Fund during the pandemic, really trying to work with community colleges to help upskill and reskill folks who either lost their jobs during COVID or, frankly, were probably underemployed before COVID. And we can talk about higher
(04:40):
education and what's been the impact of that, and of AI as well; it's just kind of the economic times that we've been in.
And then also in government: I served in the Bloomberg administration in New York City, I served under Governor Patrick's Secretary of Education when I was a graduate student, and then I served in the Biden administration, as I've talked about. And so the other theory of action that I have is that this
(05:00):
problem, or any problem worth solving (pick your problem, from economic competitiveness to climate change to nuclear nonproliferation), no one person, organization or sector can solve by itself. You need people and actors across sectors working in concert to solve these big problems. And so those are some of the lessons I think that I've learned as I've approached this work, and I'm glad we're having this conversation, because I don't know if there's a more
(05:22):
important one right now in terms of what do we make of this technology and, frankly, how do we use this technology to solve some of the problems that we have? And then, frankly, how do we also mitigate some of the problems that this technology may create, and do it all at the same time? And again, I met you when you were
(05:45):
initially conceiving of this project, and we have to start this thinking. Frankly, there is no time that's too early to have this conversation with adults, and definitely not with students, and so I'm glad to have this conversation, particularly with you, who's thinking about this with, frankly, people at the beginning of their journey.
Alex Kotran (aiEDU) (06:00):
Yeah. I actually literally remember exactly where we sat at the McChrystal Group's office in Alexandria, about seven years ago almost. That was OG, right at the beginning. I remember there was almost an immediate resonance as I was talking about the problem statement, because you had seen
(06:20):
this in an earlier wave of innovation around the internet. I'm really interested in what the parallels might look like. I think one of the biggest challenges with artificial intelligence is it's very hard for people to grasp, to wrap their heads around: what does this actually mean? How is this going to impact me? Maybe they've used ChatGPT, but the amount of noise, and even
(06:45):
just DeepSeek being an example of one of the flavors of the day dominating media headlines; I think it's very hard for someone who isn't extremely deep into the rabbit hole to come up with some actionable steps. Am I supposed to become an AI engineer? Am I supposed to master prompt engineering? There's a lot of parents who might be watching this, where
(07:07):
their kids might be 3, 5, 11, and they're thinking their first job might be over a decade away. It's very hard to predict what might come. When you think back to the way the internet changed education: you were in New York City public schools as, presumably, they were starting to get widespread access to computers.
(07:28):
I don't even know if students all had access to computers back then. Are there any similarities to the way that they were thinking about the internet or computing, in terms of how we're talking about artificial intelligence?
Chike Aguh (07:41):
The way I think about this question is that there are lessons from history that we can learn about how technology has rolled out and affected education and society overall. Whenever you look at history, you have to basically always ask the question: what is the same and what is different? Because not looking at history at all is always fraught, because there are lessons there.
(08:03):
Also, assuming that the future looks like the past is also fraught, and so you have to be able to look at history and look at it in nuance. And so there's this challenge of technology as a society. Normally I wouldn't go this deep, but we're having a long conversation, so let's have the long conversation. Let's go all the way
(08:28):
back to the steam engine. Let's go back to the telegraph, to telephony, to electricity, to the beginning of wide-scale automation in probably the late 1880s, going into the 20th century. Let's go to the application of commercial computing. Initially, back in the 50s, 60s, 70s, computing was not a consumer activity; it was done by universities and corporations. And then, of course, in the 80s, with the Apple II at home, and
(08:53):
then, when I was 12, 13, you know, initially Netscape, then AOL, and, for those of you who are too young, you don't remember all these other search engines, HotBot and so on and so forth.
And so I say all this to say: this kind of widespread
(09:14):
application of technology, that strikes like a meteorite and affects everything, has happened to us repeatedly throughout American and human history. Think about the impact of the printing press way back when. And then society has to figure out how do we use this, and how do we not? Go back to the printing press. One of the big objections to the printing press way back when
(09:36):
was that you have now made information available to everyone and society is not ready for it; the only people who should have access to all this widespread information are the clergy and nobility. One can argue we're having, frankly, a large conversation about this now with social media: there is information everywhere, no
(09:57):
moderation of it, by the way, so on and so forth, and we have the same, similar debates. Again, I'd argue that they're different in some ways. One can argue in the 1400s it was a little more about control. Today, I think, we worry about people yelling fire in a global way, which happens, I feel like, every single second. But we have to look at what is the same and what is different. So I think that's one, and I don't mean to start so globally,
(10:19):
but I think it's really important. So, two: what is the same and what is different? There are two things that I look at that are the same and different, and I'll talk about this a little bit and lean on some of my personal experiences. One is the speed.
(10:39):
So take electricity, at-home electricity: it took ChatGPT a number of months to reach the level of adoption that it took electricity 80 years to reach. That is a level of speed of deployment that we have never seen. If you look at the internet, the internet basically took about 12, 13 years to reach 80% of the American
(11:01):
population. It took electricity 80 years to reach that. So the speed at which these technologies are being deployed is really different, and that is, I think, part of why it seems so disorienting. We're used to having more time, even with the internet. I mean, I remember this as a kid, and we didn't have the types of conversations then that we're having now.
(11:21):
I kind of wish we had. I wish we had had the same types of conversations about social media back in the mid-2000s that we're having now about AI.
So, the speed. The second thing is, this is one of the first technologies, the first technology that I can think of, where... So, when I used to work at the Advisory Board, we used to sell SaaS tools.
(11:41):
I was a product manager, so literally I would work with sales teams and engineering teams to figure out what this thing should look like. So when I went to go sell something, I knew what it could do, and my job was to convince you that it could do what I told you, and then we'd build it for you and it would hopefully do that thing. Generative AI, by the way, that is the new part of AI.
(12:03):
AI, and before that machine learning, has been with us for 15 to 20 years. For example, if you look at bank loans, no human being has made a decision about whether you get a loan or not in more than 15 years. That's been done by a multivariate regression machine learning tool. The new part is the generative.
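[Editor's note: a minimal, hypothetical sketch of the kind of automated loan decision described here, a simple model trained on invented applicant features. This is not any bank's actual system or data, just an illustration of a "multivariate regression" style approval tool.]

# Hypothetical illustration only: a logistic-regression approve/deny model.
# The features and training data below are made up for the sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: annual income ($k), debt-to-income ratio, credit score
X_train = np.array([
    [45, 0.40, 610],
    [85, 0.20, 720],
    [120, 0.15, 780],
    [30, 0.55, 580],
    [70, 0.25, 700],
    [25, 0.60, 560],
])
y_train = np.array([0, 1, 1, 0, 1, 0])  # 1 = repaid, 0 = defaulted

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

applicant = np.array([[60, 0.30, 690]])
prob_repay = model.predict_proba(applicant)[0, 1]
print("approve" if prob_repay > 0.5 else "deny", f"(p(repay) = {prob_repay:.2f})")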
(12:24):
So if you look at generative AI, there's a great article in Wired magazine that talks about the development of Microsoft Copilot, and it gives this great example. Microsoft Copilot is effectively the OpenAI-powered AI tool within Microsoft Word where you can ask it questions and it gives you answers, or it can generate you an image or a video or something
(12:47):
like that. So someone had a document and they asked their Copilot: please summarize this document. And the AI produced a summary. Then they asked the question differently and said: hey, as a friend, could you really do a careful job creating a summary of this document? It would be so important to me if you could. And they got a different product, and the people who made
(13:10):
it can't tell you why. And was it better? I don't remember, but it was different.
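[Editor's note: the experiment described above is easy to try yourself. A minimal sketch, assuming the OpenAI Python SDK and an API key in your environment; this is not Microsoft Copilot's internals, just the same two-phrasings comparison against a general chat model.]

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
document = open("report.txt").read()  # any document you want summarized

prompts = {
    "plain": "Summarize this document:\n\n" + document,
    "personal": (
        "As a friend, could you really do a careful job creating a summary "
        "of this document? It would be so important to me if you could.\n\n"
        + document
    ),
}

# Same model, same document; only the phrasing of the request changes.
for label, prompt in prompts.items():
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print("---", label, "---")
    print(resp.choices[0].message.content)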
Alex Kotran (aiEDU) (13:16):
Interesting.
Chike Aguh (13:17):
And the question is: why? Why would an emotional, peer-like plea to a machine produce a different product? I don't know the answer. And so the thing here is that the people who build it don't totally know what it all does. In fact, that's why they did so much beta testing beforehand; they would give it to companies and say, hey, just
(13:37):
try it out and tell us what happens. This is new. Generally, when a new technology has come about, not to say that we always know how it's going to be used, but we know its limits, we know its extent. This is different, and in some ways it's why the work that you're doing is so important, because it requires, frankly, more creativity, more agility, more practice than other technologies that we have used in the past.
(13:58):
The thing that I think the most about is less my time as a teacher, just to be candid. I think during the time that I was teaching, I don't know if we were teaching in much of a different form factor than they taught in the 1920s. I used to say that instead of using blackboards, we used whiteboards and smart boards. But one can argue the form factor of instruction was the
(14:20):
same: students listened, teachers talked. I think we had a great classroom, but I think we were at the very beginning, if even then, of how education can look different. The work that Sal Khan has been doing for more than 10 years at this point, and many other unnamed heroic educators, is
(14:42):
thinking through how this can be different to fit the modern age. I think about my time as the CEO of EveryoneOn. One of the biggest projects that we were engaged in was a sprint to basically deploy tablets and wireless internet to underserved schools across the country. This would have been 2015 to 2018. And we would say a lot,
(15:02):
you know, hey, this is critical: if you don't give kids access to broadband and a device, they don't have access to education. And then the pandemic happened, and then everyone understood very painfully. The biggest thing that I remember was going into classrooms, and you generally see a bell curve. You would generally see there were some teachers and families
(15:24):
who didn't know how to use it, didn't want to use it; it was too new. That was a minority. Then you had a majority who were using these in the same way they had used everything: effectively, instead of reading a book, I'm reading a tablet. And then you had a small collection of folks, call it two standard deviations above, who were reimagining how do I help
(15:46):
students get this information and this learning. Because what they also intuitively realized is the information is actually commoditized, because, effectively, whether it was Google then, or OpenAI or Claude or Gemini now, the information is at my fingertips. The question is: can I reason, can I take that and use it in ways that are novel, and can I sense-make?
(16:11):
Can I see all this disparate information, whether it be an AI hallucination or a deepfake on social media, and say, okay, there's a kernel of truth there, but the rest of this is nonsense; actually, that's fairly correct; I disagree with that thing? That, I think, in a microcosm, is the challenge that we see, which is how do we not take new
(16:31):
technology and use it in the exact same ways we used the old technology, where maybe we get incremental gains in efficiency but not these quantum leaps in impact and achievement?
I think we're beginning to see that now, because of what you have on the generative side. For example, I think about when I was a teacher, I taught second grade, the amount of time I spent prepping my lessons. Not doing anything crazy, but
(16:54):
making worksheets, trying to find... You know, we didn't have a smart board; I had a projector and a speaker where I could project things, so finding the right video. At that time, this was 2006, YouTube is still a fairly nascent platform with nowhere near as much content. If I'd had ChatGPT then, I could have cut that time in half
(17:18):
or by two thirds in terms of preparation for my students, and I could have used that time more with my students. So I think what's really interesting, I think the first impact is: how does this impact not just the student, but the educator? And how can they do their work better and differently? And those are basically just efficiency gains.
And they're basically justefficiency gains.
(17:41):
Then there is the and if youmake the assumption that
everyone has a device, everyonehas internet, okay, now I can
reimagine how I do this, forexample, in math class, if we
were asking kids a question andwe wanted like all right, what's
four times two?
We'd give them all whiteboardsand they would write with their
marker and they'd put it up andI could see really quickly, but
(18:02):
I never recorded that.
If I had a device and I coulddo a version of that and then
actually record those responsesin real time because they would
be on the same platform, so on,so forth, that's data that I
could.
That's that's what, what, whatwe would call um, uh, formative
assessment data that I could useto actually get a real-time
kind of understanding check.
Or I could say, um god, Iremember we did like.
(18:23):
I taught social studies as welland I remember we had to do
basically what we call.
I forget what we call it.
I almost said like geolocation,basically it's the teaching a
seven-year-old that you live ina neighborhood, in a city, in a
state, in a country, in a world.
Alex Kotran (aiEDU) (18:41):
Did you have Google Maps?
Chike Aguh (18:44):
Nascent at that time. But it was less the geography, more so that concept, yeah, that you live in this thing that you can see, and then there's this bigger entity that you're also a part of that you can't totally see.
And then, of course, now getting to this: we had them make a country. So, hey, you're going to get to make your own country. And I remember the kids loved this, they loved this. They
(19:07):
got to make a flag, make it whatever you want, and of course they did it with crayons and stuff. What I would have loved to do is this. I put them in groups and they had to agree on what the flag would look like. Getting five-to-seven-year-olds to agree on what their flag should look like: challenging. Usually one kid just grabs the crayon and it goes from there. But what I would have done is I would have said: each of you, on
(19:28):
your tablet with ChatGPT, make your own flag. And then I basically would have brought those all together for each group and said: ChatGPT, combine these into one coherent flag. And I would have gotten something that either would have been awesome or an absolute mess, but it was their flag. So these are just small examples of how I would have used this differently as a teacher to change how I taught.
(19:48):
I had another friend. He was a math teacher; actually, he still teaches to this very day, almost 20 years now, and he turned his class on its head. So in most classes you walk in, they tell you what you're about to learn and then they show you how to do it. You try it yourself, you screw up the first time and then you work it out together, and hopefully by the end you get it.
(20:09):
He did the exact opposite. He would put a problem on the board that the students had not seen. He had not taught them the background. They would walk in and he would say: solve that problem, you have 15 minutes. And the kids would just get together and try to figure it out, and what invariably happened was one or two kids kind of got it, most struggled, and then they would walk through: okay, what is it?
(20:29):
What are you missing here? What don't you understand? And then from there he would back into the concept, and so on and so forth. Unbelievably powerful. He had some of the highest math scores in his school, in the state, and he was doing that in 2006. And I think about what he could have done with tools like this to turbocharge that and, if anything, go even faster and
(20:51):
better. So that's how I think about how this strikes in schools. But, one, we've been here before. What's different is the speed, and what's different here is the impact on the helpers. And I think our challenge is how to use this to do this not just the same as I did before, but faster, and actually
(21:15):
in an entirely different way that gets you quantum leaps, not incremental leaps.
Alex Kotran (aiEDU) (21:18):
Yeah. My producer's in the room and I'm grinning, because it's such a validation of this format. This is absolutely the level of depth that I don't think we get to. When I think about the primary way that thought leadership happens, in my space at least, K-12 education leadership, it's panels at conferences, maybe a
(21:42):
fireside chat at a roundtable, and even then... I've done many, you've done many, and for various reasons you often will have five or six panelists, so you're getting ten minutes of time, just enough time for somebody to be able to give you their headlines. I think this is really what teachers need to hear, because
(22:02):
what you've actually described to me is, in the abstract, precisely how we envision the future of education with AI. Everything you've described, none of this is: AI is going to automate or replace the role of the teacher, or supercharge and allow teachers to completely skip all this work that they're doing.
(22:23):
What you're actually describing is still very teacher-centric. You're talking about this as efficiency that creates more space for the teacher to do what they do best, which is actually not just knowledge transfer but skills development, durable skills development. And that's why, when
(22:46):
we're asked, how can I use ChatGPT to do XYZ in the classroom, or what are the prompts that I should learn as a teacher so I can do a better job teaching, we really try to push back: don't think too much about the prompts themselves. Think about what your ultimate end goal is. Some of the things you described, I think, are what we
(23:07):
would describe as those end goals.
It's like, okay, how do you encourage more student collaboration? You described an example where ChatGPT is a tool, not the only tool. The flag activity you described could absolutely work without students having access to ChatGPT. ChatGPT could make it faster, maybe more interesting, and maybe add a level of depth. Maybe they're not just creating a flag, maybe they're also
(23:28):
creating the constitution and the charter. The work product might change; the process is still the same. And I'm sure you saw this in New York City public schools. We've worked in New York City public schools. We also work in Prince George's County, actually just down the street, these
(23:49):
really large districts. The teachers are brilliant. They just don't have the bandwidth. So you go in and you say, oh, we need more project-based learning, we need to build critical thinking skills, and these poor teachers are like: I'm bought in, you don't need to convince me. We've done surveys of teachers; like 95-plus percent of teachers agree on critical thinking, communication. But also, I'm not measured as a teacher on whether my students
(24:09):
are building collaboration skills. I'm not measured as a teacher on critical thinking skills specifically. Now there are certain standards that you might argue connect, and that's what we try to do: actually say, okay, look, we can't change the standards as quickly as we'd like to. We have in some states, but a lot of the stuff that you're
(24:30):
describing, there are some connection points to what teachers are already having to do. So anyways, this is just a tip of the hat to what you just described.
I want to talk more about education, but I do want to go back to this journey through history. You went as far back as the 15th century. I hadn't really thought of the printing press in my narrative of this, but I love that. I've really thought about the first
(24:51):
industrial revolution, the loom, the steam engine. You talked about the telegraph. It is interesting. I obviously wasn't around in those times. I was around, though, for... I remember the first time I used a computer. I remember the first time I had access to the internet, the
(25:14):
first time I printed a Dragon Ball Z photo, and it was the wildest thing that I could take something that was virtual and make a physical manifestation of it that I taped up in my bedroom. But as a child, with my parents, my teachers, there was never really this meta conversation about this moment that we're in. And it feels different now, and I wonder if that's just because we've had a lot of revs, or the
(25:37):
velocity. But I am kind of curious: what do you see as the reason that there now seems to be a lot more self-awareness in society that this is going to impact a lot of things, even if we don't know exactly what it looks like? Even just this steering committee that I'm a part of, where we're thinking through these big-picture questions about how AI is going to
(25:59):
change things: will AI dramatically transform education in the next five years? I'm just reflecting now, and I wonder if those questions were asked back then.
Chike Aguh (26:11):
No, they were not. I can answer that for you. The answer is no. A couple of reasons. Actually, someone brought this up at another conference that I was at, and it was a powerful point. One, this is the first transformative technology of this sort that did not come from
(26:32):
some initial research from the government. If you think about every other one of these technologies, maybe short of the printing press, maybe the steam engine, but even there I can find you a piece of government: the internet, obviously; telephony, obviously, because that came out of Bell Labs, which was part of AT&T, which was a
(26:55):
government-sanctioned monopoly; radar, which one can argue is the most important invention of World War II after the atom bomb. Everything came out of government, so there was some visibility by somebody. I was in government when ChatGPT hit, and we all were like: what is this?
Alex Kotran (aiEDU) (27:13):
Is that to say government, as an institution, is oriented towards longer-term thinking?
Chike Aguh (27:20):
No, I would say that for some of these other technologies, people were surprised when they came out, or dazzled, but there was a bigger collection of society that was aware it was coming. ChatGPT caught everyone, and again, as someone who sat in government: people in government were not surprised by
(27:41):
the internet. Folks had been working on the internet since the late 70s, early 80s. No one was surprised by computers; we'd been working on computers since World War II, with the Turing machine and things like that. So, number one, this truly came out of nowhere, or so it seemed, even though, you know, OpenAI has been around since 2014. So I think that's one. Two, its capability shocked a lot of people.
I two its capability shocked alot of people.
(28:05):
And again, think about earlychat.
Gpt got a lot of stuff wrong,produced things that were
unusual, but it was doing thingsthat you know.
As a kid, I watched Star Trek.
I love Star Trek.
I've watched every episode ofthe original Star Trek, the Next
Generation Star Trek, deepSpace Nine, star Trek, voyager,
literally every single episode.
These are things that in ascience fiction show, they would
(28:27):
ask the computer to do.
When it did and we thought thiswas oh, that's crazy.
It's science fiction, but it'sdoing it imperfectly, but
getting better, so on.
So that's number two.
And then, I think three, thegenerative piece, the ability to
.
The same way, when you printedsomething for the first time,
that was a big deal for you.
The fact that I could say, makeme a cat wearing an orange hat,
(28:51):
you know doing ballet in thestreet, and it could make that,
you know, pick Dali or pickGemini, that was astounding.
It was something that we didnot think was possible and even
for and I would say the otherthing is, lots of companies are
(29:11):
doing generative AI, but thefact that it came from not
Google, not Microsoft per se,but it came from not written,
not, you know, not at first,that also shocked a lot of
people.
The fact that Google, from everything I read, was caught a little bit flat-footed on that score. And, by the way, it's also why people didn't know. If this had come out of Google, there's no way someone in a job like I used to have would not
(29:31):
know, because we talked to Google all the time; most governments would, with a company of that size. The fact that this came from OpenAI, which people were aware of, but not to the same extent, just again made it really surprising. So it came so fast, it seemed to come
(29:53):
out of nowhere, and it could do things that we'd previously thought weren't possible. And the last thing, going to the last point: when they would go to OpenAI and say, hey, tell me what this thing does, they're like, I can't totally tell you. I mean, the new model has what, eight billion parameters?
Alex Kotran (aiEDU) (30:09):
Wow yeah.
Chike Aguh (30:10):
So the issue is: tell me what it does. I don't totally know. And now, as we get into the age of agents, you know, I'm hearing 2025 will be the age of the AI agent, there are things that are exciting and things that are disconcerting about that, because I don't know what it can do. The internet, I knew what it could do.
(30:31):
Social media, I know what it can do. I didn't think about all the applications, but I know what it can do. But this? Oh, I actually don't know. And that is, I think, partially why I'm glad we're having this conversation. And the other thing I'll say, and this is a credit I give to President Biden: he and a number of other public policy leaders on both sides of the aisle actually were
(30:53):
like, oh, this is not simply the province of the private sector; this is actually a thing that's important to the entire American society. And so I was there when President Biden put out his executive order on AI. We've had a number of governors do the same thing, a number of senators who are trying to get toward what AI regulation should look like. I know there's debate about whether that should exist or not,
(31:13):
but I think it was the first technology that I have seen where, at the inception, public policy leaders said: no, this is not just about what these companies are doing, it's not just for them to figure it out, we're not going to leave it to them. And I think that partially came from the fact that there have been a bunch of other technologies that we kind of just let go, and it led to some places that we probably don't
(31:34):
totally feel great about today. Social media is the one that pops to mind the most, and so I think that's partially why we're in a different conversation than we've been in in the past. Can I tell you we'll totally end up in the place we should? I can't. But God, I wish we had had this conversation about social media in 2008. I wish we'd had this conversation about the internet
(31:55):
in the mid-90s, or any of these other technologies that were as transformative.
Alex Kotran (aiEDU) (32:02):
I wonder. So social media is interesting, because you almost start to get to a divergence there. Maybe the printing press is a bad example, because there actually was quite a lot of disruption to the status quo. But electricity, cars, the telegraph, computers, the early
(32:28):
internet: I think the innovation was directly coupled with progress, with benefits to society. Social media was, I think, maybe that first time that we started to see... I mean, there are obviously a lot of broad externalities, right, we think about global warming and so on, but in terms of this
(32:49):
tangible sense of: wow, this game-changing technology that had so much promise also had all these externalities that we hadn't predicted, and it was very tightly coupled; it happened within a matter of, let's say, five to 10 years. And so I wonder if that proximity, everybody sort of looking at all the brain rot, and
(33:12):
TikTok and Instagram Reels, and increasing rates of childhood depression and self-harm and suicide, meant that even before AI, people were starting to feel like: I don't know if social media necessarily made our lives better. It changed the way we consume media, sure. And then AI comes around, and I almost think we were kind of oriented
(33:34):
towards a little bit more skepticism: just because we have this technology, is it really going to live up to the promise? Because social media, when you were launching your work at EveryoneOn, this was like 2015, I mean, that was kind of the heyday, almost the peak of
(33:54):
a lot of those companies.
Chike Aguh (33:56):
I would say that, from an evaluation point of view, from a social regard point of view, your point is that there's this very tangible harm. Now, if we go back, and again, I came from the Department of Labor, so I think about some of these things: if you think about automation more broadly, the
(34:16):
reason that we have OSHA is there was a famous fire in New York City called the Triangle fire. It was a garment factory on the Lower East Side, and effectively they were making garments and they kept flammable stuff too near other stuff, and it was an enormous fire. And
(34:38):
because we didn't have worker rights legislation, they would actually lock the doors on the workers during the day so they couldn't leave. So this fire happened. You can read descriptions of this; you see burning women jumping out of third-floor windows to try and get out. And so we had to have a conversation: ooh, this automated factory thing that we have, which has produced
(35:01):
all these goods that we all love...
Alex Kotran (aiEDU) (35:03):
Shoot,
there are people who are being
exploited and who are suffering.
Chike Aguh (35:06):
When we think about this technology, the harms that people are worried about are a couple. One, people think about the deindustrialization of the 1970s, when factories closed all over this country. Whether it be in my home state, where Bethlehem Steel used to
(35:27):
have a site right west of Baltimore City, which is not there anymore, it's now a huge industrial park because Bethlehem Steel is not around, the worry is that that would come back and increase. Secondly, we'll call it a 1A: the people who would be affected by that are different and more widespread than we're used to. So if you look at it, what's interesting, and it says a lot
(35:48):
about AI generally: when we've had technology, and you think about the people who are quote-unquote harmed or who lose, it's generally people who are doing routine work or work that you do with your strength, generally low-paid work. What's really interesting is that AI is actually not good at simulating that. The reason why self-driving cars are not on the road yet is because it's actually really hard for a car to get all those
(36:10):
movements right and then also have the hand-eye coordination to know that's a squirrel versus that's a human, and so on and so forth. What AI is really good at...
Alex Kotran (aiEDU) (36:17):
Generative AI: reading and writing.
Chike Aguh (36:21):
And the people who do that for a living are under a threat they've never been under before. I just made a website and I did not hire anyone. I had ChatGPT make the mock-ups, and I wanted to see if it could produce
(36:44):
all the back-end code for me, and it did. I've heard of a lot of nonprofits doing the exact same thing, and so this is what's different. It's not only that it's creating a harm; it's creating a harm for people who aren't used to being on the receiving end of harm.
Alex Kotran (aiEDU) (36:51):
The people who are actually in a place to drive these conversations. If you're a lawyer who does document review...
Chike Aguh (36:59):
This is a challenge
for you.
If you are an entry-level software engineer, this creates a challenge for you.
Alex Kotran (aiEDU) (37:08):
Communications manager. I started my career drafting press releases.
Chike Aguh (37:16):
I've had this conversation with journalists particularly. So that's also new. I think the other thing is: that's a fear we've always had, and now it's come to life. Then, beyond that, you start thinking about the really bad stuff, which is: you have bad actors out in the world, and can this turbocharge
(37:37):
them too? Whether that be misinformation or, you know, if you look at all these models, I believe there are parameters in those models that disallow you from doing certain things. So, for example: design for me a pathogen that would create X horrible harm. They're like, nope, you can't do that. And there's a reason, because we have people who don't want that to happen.
(37:57):
So that's just one example. And then I think the last piece, and I think this is where it does tie to social media, is one of the things that I worry about, and I think we saw this particularly after the 2016 election: can technology dissolve civic bonds and increase the
(38:21):
distance between us as people?
We saw this with social media. I used to say the internet has the ability to divide and to unite, and I think we've seen both. And for AI it can be the same thing, and I think there's a huge worry about that. If, for example, I don't need to hire someone to do that website, that is a social interaction that does not need
(38:46):
to exist anymore.
Alex Kotran (aiEDU) (38:49):
I hadn't thought about the fact that work is going to become more automated even if humans are augmented. I see what you're saying: it's actually breaking down these built-in social interactions that are a part of your day. I guess you don't really think about working with your designer as a social interaction, but it very much is.
Chike Aguh (39:09):
Again, we see this with e-commerce. For those who are of a certain age: Amazon, when I was a kid, only sold books.
Alex Kotran (aiEDU) (39:18):
You could
have Best Buy.
You could have Radio Shack.
Chike Aguh (39:20):
Yeah, these are places that some of you who are too young will not remember at all. But yeah, that was a social interaction that in many ways doesn't have to exist anymore, or the beauty of it doesn't. So that's the other piece that we worry about, and, by the way, it's compounded by the others. If you have economic dislocation, if you have bad
(39:41):
actors that you're trying to mitigate, and you have increasing social distance, the recipe potentially is fear and polarization, and in some ways we're living parts of that now.
Alex Kotran (aiEDU) (39:51):
Yeah, I mean, the other thing that I'm thinking about when you describe the world before Amazon: I've been to a Best Buy recently, and there are still lots of folks working there, but it's a very different experience. Because when I used to go to, like, Radio Shack, the people who worked there were truly knowledgeable. Like, oh, you're trying to film a
(40:12):
podcast? Come over.
Chike Aguh (40:13):
Let's look at things, they would tell you all about it. And now?
Alex Kotran (aiEDU) (40:15):
It's like: why would you ask me? You have the internet, you have ChatGPT. And so even for the people who are still working in those roles, the level of expertise, or the demands on their expertise, decreases. I want to start steering us a little bit back to education, staying on the historical frame.
(40:35):
So the lens I want to use is: for teachers, parents, ed leaders who are listening to this, they're like, well, what are we supposed to do? What am I supposed to tell my kid or my students? The one thing that I'm hearing from you,
(40:56):
and that I've heard from every other credible expert and economist that I've talked to on this topic, is: we really just don't know. We can't say, in 15 years, these are the jobs that you need to go towards. Five years ago, I think we could say credibly: computer science, STEM, data science, those hard sciences,
(41:18):
you're going to be set. Lawyer, doctor, if you can get onto one of those paths. And a lot of the conversation in the social sector was how do we get more students onto those pathways. But you're talking about knowledge work, you're talking about some of the things that AI can do, and I think there's
(41:40):
this first question of: is STEM still the be-all, end-all?
And so, before you answer that question, I still want to zoom out, but that's sort of the meta thing we're trying to get to. So let's go back to the first industrial revolution. You talked about the Triangle fire. The interesting thing is, prior to the invention of the loom and
(42:03):
the steam engine and, you know, mechanistic automation, you have these seamstresses who are working in cottage industries, and they were true experts. I mean, they were creating entire garments for clients. They were working from home, like OG work from home. They had this rounded expertise. They could create the entire dress or the entire suit.
(42:25):
The steam engine and the loom come along and they lose their jobs. But the story about the first industrial revolution is not one of job loss. It's one of job creation. All of these jobs were created.
Chike Aguh (42:38):
And I think that's often the story. It's the same conversation around the ATM and the banking industry.
Alex Kotran (aiEDU) (42:40):
Yeah, and so a lot of times there's a bit of a waving away of the worry about AI-driven job loss, because we just don't know what jobs are going to be created. And I actually believe, net-net, there's probably going to be more jobs created than lost. What I worry about is: I don't think it was an upgrade if you were a seamstress working from home and you're now in a factory
(43:05):
where the doors are locked and there aren't any windows and it's filled with smoke.
Chike Aguh (43:10):
Not the case anymore, or at least mostly.
Alex Kotran (aiEDU) (43:14):
But I wonder: with AI, is simply learning how to use the factory, learning how to pull the lever, push the button, sufficient? Is that what we want kids to orient towards? Because what I'm trying to understand is: what are the skills that will command economic value in a world where
(43:35):
expertise is commodified, where the ability to write code or create a website or design something, AI can do that? And what is the value that humans can add, so that they're not just the person sitting behind the self-driving car making sure that it doesn't hit the squirrel?
Chike Aguh (43:50):
So let me try and bring all this together, and I'm thinking about, again, your parents and your education leaders. You said it really well, and I'll use an anecdote for why I think their job is so important. When I was in graduate school getting my master's in education, I had a classmate, Pav. Pav was from Singapore, and he had been in the Ministry of
(44:15):
Education, had been a teacher, became a principal and now was back in the Ministry of Education. We were in a class on teacher education; it was in 2009. We asked him one day: hey, Pav, how do you all do this? He said: well, if someone wants to become a principal in Singapore, here's what we do. We put them in a year-long apprenticeship with a senior principal. During that year, we have them write two papers. One of those papers is: tell me what you believe the world will
(44:40):
look like in 50 years, and talk to all the economists, the political scientists, all that, as best as you can. And the second paper is: what is the school that would prepare your students for that world? And that, ideally, is the school that you should create. And so, when I think about what every educator is tasked to do, it is to prepare your kids for the future and have as clear an
(45:00):
idea of that as you can.
Let me bring a couple of these things together, and let me say maybe some paradigms first to think about, and then how you make that concrete. The first paradigm: it is important to understand, and it's why talking about AI by itself is, I would argue, not helpful. You want to talk about it with a bunch of other economic trends,
(45:22):
and let me just say the biggest. The biggest is that the size of the American workforce between now and the end of the century will stay the same or shrink. Why? Because in America right now, the average couple needs to have 2.1 kids to keep the population level. The average couple right now has 1.6.
(45:43):
That's just math. That's one. Secondly, that means we're going to have a tighter workforce, what we call a tighter labor market, meaning there's just not as much flex. There are fewer people who are not working, which in many ways is good, but it actually leads to shortages. It leads to jobs not being filled. Just to give you a sense of the data: right now in America, there
(46:06):
are nine people looking for every 10 jobs open. Currently, in my state of Maryland, we have one person looking for every two jobs open. And when you burrow into certain industries, ones that particularly need to be filled, like healthcare, semiconductors, quantum, parts of AI, the ratios get crazy in terms of the number of jobs that are open and will be open versus who we have
(46:27):
and are going to have.
That's one, which means a couple of things. One, I always say we need all the people and all the AI. We actually need to hire everyone, and even once we hire everyone, I need to make each of them factors more productive to make up for the people that I don't have. Ideally, the way that most countries solve that issue is
(46:49):
immigration. I'm not sure if you've watched what's happening down the street from where we are here in Washington DC; that's not going to get solved anytime soon. That is a shame, by the way, partially because of my own background, and it's an economic opportunity missed, but, candidly, I don't expect us to solve that anytime soon. Therefore, we have what we have, which means we need all the
(47:10):
people and all the AI. That's number one. And actually, let me go back to a 1A of number one. The way that I think about this is: when I was in the private sector, we used to talk a lot about doing more with less, meaning let me make incrementally more revenue with
(47:32):
less people, less resources. The paradigm that I have said we want to move to is doing a lot more with a lot more. The first "a lot more" is quantum leaps more impact, quantum leaps more revenue. And the second "a lot more" is that combination of people and machines. We're talking about AI here, but there will be other technologies that come down the pike. That is the choice that we need to make.
Secondly, we're talking about skills. There are lots of skills that we
(47:55):
can talk about, but I just put them in two buckets. The first bucket is what I call just-in-time skills. These are the technical skills that you learn to do your job. It is the new procedure, the new software, the new protocol, the new regulation, the new tool that you need, and those skills change really fast. But they're also ones that you can learn in a very didactic, very classic classroom way.
(48:15):
As time goes on, AI will do more of those things. I can't tell you how many more, I can't tell you at what speed, but it will do more of that, which makes the second bucket of skills as important. Some people call them 21st-century skills, some people call them employability skills. I call them timeless skills.
(48:36):
These are the skills that human beings have been doing for thousands of years, from leadership to communication to conflict resolution, and the thing about those skills is that you don't learn them reading a book. I analogize them to being an athlete. I don't care what your sport was: basketball, football, field hockey, hockey, soccer. Shooting a jump shot, throwing a football, hitting a baseball,
(48:57):
you didn't watch a video about it solely; you didn't just go to a class about it. You did it. You did it first in practice with a coach, and you did it in a risk-free environment, and initially you sucked, and then you got better through practice and coaching, and then you got good enough where the coach is like: okay, I trust you to do that during a game where there are stakes.
(49:17):
Then, during the game, you weren't as good as you were in practice, and the coach still watched you as you went back and forth. You do that cycle over and over and over again. That is how human beings get good at the timeless skills. You don't learn conflict resolution from a book. Yes, you read some things, you hopefully get some practice, and then you get into a conflict and you have to resolve it.
(49:37):
Probably not perfectly the first time, but over time you get better by doing, learning and coaching. Which makes the last thing I say, I think, really critical, which is that right now the biggest problem in education is too much "students listen and teachers talk."
What makes me hopeful? I'm seeing more and more schools, from
(50:00):
K-12 to higher ed, where work, where students are actually out there in the world doing a thing tied to an academic purpose, is becoming more and more of what schools are doing. I think about the Bloomberg Philanthropies funding a quarter-billion-dollar initiative where they're working with 10 high
(50:20):
school districts around the country, where students will spend part of their week in classrooms and part of their week on site at a hospital, and so when they graduate high school, they are literally graduating with a diploma, a certification and a job. And, by the way, they're not just learning, for example, about nursing or phlebotomy. They're actually learning what you don't learn in a class, which is: I don't like my supervisor, how do I deal with that? Or this
(50:41):
patient's really mad at me, how do I resolve that conflict for them? Or I have to give a presentation, how do I do that when it really matters?
And when you bring all that together, there is a world, and maybe this will be the last thing that I say. You talked about these two potential futures. One of them is this future where the AI takes all the jobs,
(51:02):
there's no use for human beings anymore, and that's not a world that most of us necessarily want to live in. There's another world where it's all techno-optimism and it's all great. I think the world that we actually are going to live in is one that we have to choose, and that choice is that we want to live in a world where the human and the machine
(51:24):
work together, where we use the AIs to optimize the people so that we get what I believe is the most gain possible. But it is a choice. It is not one that will be a fait accompli. The one place where I kind of ding my techno-optimist friends is where I say: you kind of assume this will happen. I don't assume anything. If this world that you want is going to
(51:45):
happen, it's going to happen because we choose it. And, by the way, it is not an easy choice to make, because, candidly, the people who are lucky and have capital and money will make money even from outcomes that actually are not great for everybody. We actually have to make a choice, and by we I mean truly the royal we: business, government, the nonprofit space
(52:05):
and everyone in between. This is the future that we want, and therefore this is what we have to create.
And part of what we have to create is this paradigm, which you've been working on for, again, getting close to a decade: we have to put these tools in the hands of everyone, strategically, but also soon. One thing I say to schools when we're on tour is: think about this.
(52:26):
And now let's get tactical. If you're a school, this means a couple of things for you. One, start something simple soon with AI with your kids. Don't do it everywhere, because even businesses are learning: oh, I shouldn't apply these tools everywhere, because it doesn't make sense everywhere. But you cannot say school will be an AI-free zone. That is a mistake. You disserve your kids, and particularly, I worry, you
(52:49):
will disserve the kids who have the most challenges in front of them, because the kids who are doing fine will get exposed, but the kids who aren't are not going to. So that's one. And then again, let me say a 1A here. If you were hiring someone, you, Alex, you, anyone else, you wouldn't hire someone who couldn't use Microsoft Office.
(53:10):
You wouldn't do it. You wouldn't hire someone who couldn't use Google Search. These tools, in the next 12 months, will be as ubiquitous as those, and you will be expected to use them. And if you don't prepare kids for that, you are disserving them. That's number one. Number two: how, as a school, are you integrating AI in the classroom with work and experiential learning outside
(53:32):
the classroom?
Alex Kotran (aiEDU) (53:33):
Right,
because, while it's absolutely
the case, I would not hiresomebody who doesn't know how to
use Google Search or doesn'tknow how to use Microsoft Word.
If someone showed up to aninterview and I was like, well,
why should I hire you?
And they said, well, I know howto use Google, I know how to
use Microsoft.
Chike Aguh (53:48):
Word.
Alex Kotran (aiEDU) (53:50):
I mean that
would be the end of the
interview.
It's like well, it's necessary,but not sufficient.
So what you're describing is thetools are absolutely critical,
but they're just one ingredientand what's more important is
sort of what do you do withthose tools?
And that's the role I thinkthat teachers play right.
It's like actually using thosetools as sort of like force
(54:10):
multipliers for thatexperiential learning, things
that build student agency,things that get students to do
the type of skills that youcan't just read a book.
I love that paradigm becauseyou don't learn how to ride a
bike by reading a book.
You could read 10 books aboutriding a bike and you're still
going to have to do what all ofus did, where you put the
training wheels on and figure itout.
Chike Aguh (54:34):
I've read a lot of
books on how to do a jump shot.
I do not shoot.
I do not shoot like Luka.
I don't because I don't do whathe does on a regular basis, but
continue.
Alex Kotran (aiEDU) (54:49):
No, I think
we're landing at sort of a
frame that educators can use,that maybe parents can use,
which is, um, how can I make sure that in the educational
experiences that I'm creating, at least a component if not
the majority, but at least a component
goes beyond knowledge transfer, goes beyond just sort
(55:10):
of building that rudimentary set of
skills, things that could be sort of built into an
algorithm?
Well, it may be very hard for someone to intuit what AI can or
can't do today, let alone 10 years from now.
I think it's, you know, thissort of like more meta
perspective of you know what,what are the types of things
(55:30):
that are kind of innately human?
And I guess the challenge is that the answer to that question
has changed, to your point about sort of generative AI and, you
know, creating art. And so something else you said a little
earlier is the velocity of how fast things are changing.
It demands everybody that's apart of this equation to be kind
(55:55):
of like on the ball, to use thebasketball analogy again.
How does, how can educators andfolks that really care about
the next generation, how do youkeep up with all the stuff
(56:16):
that's happening, because I'm like, literally, my job is
to teach people about AI, and I find myself um consistently
overwhelmed and feeling uh like I have no sense of what's going
on, and so, um, yeah, I mean, do you have any recommendations?
Chike Aguh (56:36):
It's a great question. I see a couple
of things.
One, uh, again, because of thenature of the tool, I find
exploration is helpful.
Um, I got to work with a few companies who got early, uh,
beta tests of Microsoft Copilot, of GPT or Claude, and one of the
things was just for them to push it, see what it can do, ask
it a question that you don't know if it can answer, and see
(56:58):
what you get back.
That consistent pushing and again, it's almost like
exercising, pushing it to its limit and seeing where it
starts to break is really important, because you'll begin to learn.
Okay, actually it's not good at this.
We need to ask it this way.
Secondly, you know I hate saying reading, but yes, reading. Um,
you know, I have a Google search feed where, when I see
AI news, so on, so forth, I kind of spend a couple minutes a day
(57:18):
on that, as well as on a bunch of other emerging
technologies.
The third thing is these tools are actually great for learning,
and one can argue that the best skill that
you can give someone is how do you teach someone to teach
themselves things?
And so there are a bunch of great prompts around.
So, for example, I need tolearn in the next two weeks, I
(57:42):
need to learn the ins and outsof quantum computing.
Can you create me a syllabusand a set of videos and articles
that I should read?
That would make that possible,that I could have a reasonable
conversation with a quantumresearcher?
Go and it will do it.
You will actually have a greatlist.
So actually use these tools tohelp do that.
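To make that concrete, here is a minimal sketch of what that kind of prompt could look like in code. It assumes the OpenAI Python SDK and a placeholder model name; neither is specified in the conversation, and any capable chat model or provider would work the same way.

```python
# A sketch of the "build me a syllabus" prompt described above.
# Assumes the OpenAI Python SDK; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "In the next two weeks I need to learn the ins and outs of quantum computing, "
    "well enough to have a reasonable conversation with a quantum researcher. "
    "Create a syllabus and a set of videos and articles I should read."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice; swap in whatever model you use
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```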
(58:03):
And now I believe, at least with ChatGPT I know it does this, and
Claude and Gemini, you're going to begin to be able to get
notifications.
So, for example, with GPT Tasks, and Claude has, I think,
a similar feature.
Hey, can you ping me anytime there's an article that I should read on
AI?
If you have it on your phone it can send you notifications.
But that last one, I think, isreally important because, again,
(58:27):
let's use the tool, which actually can um be used for
things like that.
Because I think if I were ateacher today, that is the
number one thing I would beteaching my kids to use that
tool for.
Alex Kotran (aiEDU) (58:41):
Yeah, it's
like you almost insulate a bit from the power of the technology,
uh, by giving kids the ability to actually use it
as a tool for their ownself-improvement, because what
you just described in terms ofthis syllabus for quantum
computing, what you can also dois say here's my starting point,
I'm a high school student who'sactually behind and I actually
(59:06):
struggle with linear algebra,and it will actually tailor the
syllabus.
Chike Aguh (59:11):
And this is
important.
Alex Kotran (aiEDU) (59:14):
Because you
can spend hours going through.
There's plenty of lectures thatare online about quantum
computing from MIT, from Harvard, and you go sit and watch those
lectures and it's like and I'vewatched some of them it's very,
very hard to follow.
And so that's the buzzword ofpersonalizing education.
I think it really goes beyondthe buzzword.
I think there is a there.
Chike Aguh (59:34):
There.
The other thing I'll say is make the tool... well, two other things
I'll add.
One, don't just use one tool.
So, like most of us, I started on ChatGPT, but I was also using Claude,
Gemini.
I use Copilot, because they're different.
Do you have a favorite?
I have found certain ones aregood for certain things, and so
(59:58):
there are some.
For example, just take theCopilots.
I like Microsoft Word Copilot.
I like Microsoft PowerPointCopilot.
I think Excel I'm not quitethere yet, um, but same thing
with all the other kind ofdominant tools.
I think.
Learn to use them all.
Um, it's unlike, you know, I think, what will happen.
(01:00:18):
Unlike the search engine game from the late 90s, which no one
here will remember, but there were.
Google was one of them.
There were a bunch of other ones that do not exist anymore.
AltaVista, HotBot, all these. I do not believe we're going to
get to like there will be one.
I don't believe that. I believe we will have several, because there's
just so many companies and so much capital.
But the other thing is make thetool ask you questions.
(01:00:40):
So, for example just take thatexample Make me this curriculum
for quantum computing.
What else would you need toknow about me to create an
accurate uh syllabus?
And it'll ask you hey, it'll behelpful for me to know your age
, your level of knowledge.
Like make it ask you questions.
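As a small sketch of that two-step pattern, under the same assumptions as before (OpenAI Python SDK, placeholder model name), you might first have the model ask its clarifying questions and only then have it write the syllabus:

```python
# A sketch of "make the tool ask you questions": the model asks what it needs
# to know, you answer, and only then does it produce the syllabus.
# Assumes the OpenAI Python SDK; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # illustrative

history = [{
    "role": "user",
    "content": (
        "Make me a curriculum for quantum computing. Before you do, "
        "ask me what else you need to know about me to create an accurate syllabus."
    ),
}]

first = client.chat.completions.create(model=MODEL, messages=history)
print(first.choices[0].message.content)  # e.g. it asks about your age and level of knowledge

# Feed your answers back and ask for the final version.
history.append({"role": "assistant", "content": first.choices[0].message.content})
history.append({
    "role": "user",
    "content": "I'm a high school student and I struggle with linear algebra. Now write the syllabus.",
})

final = client.chat.completions.create(model=MODEL, messages=history)
print(final.choices[0].message.content)
```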
Alex Kotran (aiEDU) (01:01:03):
Yeah, and I think
there's a fallacy where people say, well, I want to learn
prompt engineering, so what are the prompts that I need to
memorize?
Chike Aguh (01:01:10):
There's a lot of
snake oil and people actually
selling this stuff online, ofcourse, oh here's the prompt
Bible, the stuff that you needto memorize.
Alex Kotran (aiEDU) (01:01:17):
And when
people ask me like, well, how do
I start learning promptengineering, I think what you
just described is precisely.
It's just like just ask, askthe LLM, ask the LLM, how can I
prompt you?
And it will literally give youthe cheat codes to design that
prompt.
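In code, that move is just two calls: ask the model to draft the prompt, then send the prompt it wrote. Another small sketch under the same assumptions (OpenAI Python SDK, placeholder model name; the example task is illustrative):

```python
# A sketch of "ask the LLM how to prompt it": the model drafts the prompt,
# you review or edit it, then you run it.
# Assumes the OpenAI Python SDK; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # illustrative

meta = client.chat.completions.create(
    model=MODEL,
    messages=[{
        "role": "user",
        "content": (
            "I want you to help me research how AI is changing a career I care about. "
            "Write the exact prompt I should send you to get the most useful answer."
        ),
    }],
)

drafted_prompt = meta.choices[0].message.content
print(drafted_prompt)  # the "cheat codes": a fuller prompt you can tweak and reuse

answer = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": drafted_prompt}],
)
print(answer.choices[0].message.content)
```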
But what you're pushing for isyou just have to get hands-on.
(01:01:38):
You literally have to push yourself, and a lot of the time,
I think what you're alsoalluding to is sometimes it's
inefficient.
Sometimes it's actually easierto just write the email yourself
.
Oh yes, sometimes it's actuallyeasier to go and do the
research, but that becomesintuitive once you've actually
had the revs and experience thehallucinations yourself and
(01:02:00):
experience where it's actually way faster to have ChatGPT, or
I mean, businesses are learningthis.
Chike Aguh (01:02:07):
I think so.
For example, I was sitting with a researcher at MIT, and
one thing he said is like, look, if you're a business, um, you
have to be careful where you apply this, forget values and
all that, just literally for effectiveness.
So, for example, um, if you're applying AI in a situation
(01:02:27):
where you need 99.99% accuracy, AI is actually not the tool for
you, because the cost to go from 96% accuracy to
99.99 is exponential.
I didn't get that until he literally showed me the cost
curve.
So that's, I think, and it'sthe same thing in life.
There are some things where, toyour point, actually it's just
more efficient for you to do it.
The other thing and actuallywe're having this conversation
(01:02:47):
with CJ and I is, and now I'm speaking about being a parent,
so I'm speaking to educators, people who work with kids
particularly: make this a side-by-side adventure, in some
ways.
You don't want the kids walledoff from it, and I also don't
want them in the AI wildernessby themselves.
You want them with a coach bytheir side.
(01:03:10):
The same way, if you were anathlete, your coach wouldn't say
just go shoot, jump shots andI'll come back and I'll see you
in a day.
That's not how it works.
Be their coach through thisprocess and, by the way, you're
going to be learning too andyou're going to feel stupid
every now and again, and that'sokay, but part of your job is to
help them make sense of whatthey are seeing and what they
are using and also make thatjourney a conversation between
(01:03:33):
the two of you.
One of the things that the AI cannot do yet, and I don't know
if it will ever truly do, is that relationship between
teacher and student and I use teacher in the writ large sense,
whether you be a parent, you be a guardian, you be anybody.
It is, um, that
relationship of, like, please help me make sense of this and kind
of keep me safe and I keep you safe, again also in the broad sense,
(01:03:55):
that is, I think, really really important, and
that's just a critical um job that we all have.
I have two kids myself.
My son is about to be eight, my daughter's
three and, as I think about them in this journey, um,
that's also part of the job, yeah.
Alex Kotran (aiEDU) (01:04:11):
The
interesting thing, though, is
you're when you describe thatside by side.
I think in many cases, what transpires is the student,
the kid, is often showing the parent or the teacher absolutely
, and I did a presentation.
I went back to my alma mater, Cobbly High School.
I did two assemblies, one for teachers, and I had like
(01:04:31):
a bunch of my former teachers from Cobbly High School
were there.
It was very, very weird, um, and I did sort of a
similar version of this is sortof like a big picture like what
is ai?
What is?
What are language models like?
Why should you care about this?
And the teachers were liketheir eyes were like dinner
plates and and there were peoplelining up like oh, my daughter
is, you know, majoring inaccounting, like what advice
should I give her?
And the students were prettychecked out and at first I just
(01:04:54):
sort of chalked it up to like,yeah, high school students, you
know I never really cared aboutthese assemblies.
And then afterward there wassome kids, you know sort of
approached me.
We've seen all this stuff.
None of this is new.
Can I ask for a show of hands?
Who's using ChatGPT to help with homework or to cheat on
their homework?
A third of the group raisedtheir hand.
(01:05:16):
They were like everybody shouldhave raised their hand.
We're all using it.
To me that's new.
I think that also has created alittle bit of a fear among
educators because it's like theydon't understand it.
They feel like it's sort of something that's out of,
something they can't control, which you can't, um.
(01:05:38):
But I mean, what you're saying and what I feel like we have
been pushing is there's only one solution to that, and you can't
ban it.
You can ban it in schools.
They're going to use it out of school.
You can ban it in schools.
They're going to use it on their phone.
Chike Aguh (01:05:49):
Your only solution
is.
Alex Kotran (aiEDU) (01:05:51):
You have to
learn yourself.
Chike Aguh (01:05:55):
And you can't.
What I can't understand is how can you ban a thing that the minute
they step outside of your environment, like when they step
into the workforce? Like I always remember, there was this great
story, maybe like a year after ChatGPT came out,
and it was a software engineer, and basically ChatGPT,
and I think that was the tool he used, made him so much more
productive that, and he was remote, he had three
(01:06:18):
full-time jobs, and so he literally made like 350K over
like a year or two, and he
paid for his kid to go to college, I think he paid off his
house, and after a while he starts to feel bad.
He was like, should I tell them that I have three jobs?
And in my head I'm like, I don't know about that guy, but, um,
(01:06:39):
that there is a tool that can increase your productivity
two to three x that you don't use.
That's, that's not tenable.
And so I think, one, parents, guardians, that broad teacher
category, you have to learn these tools.
Secondly, your job may not be to teach them technically
what to do.
Your job is to help them makesense of this.
(01:07:02):
Your job is to help them.
What should I use this for?
What should I not use this for?
Should I write this myself?
It produced this answer for me.
Do I believe it?
And if I don't, how could Icross-check this?
That's actually where thatteacher category is so useful
and so valuable.
Alex Kotran (aiEDU) (01:07:23):
I'm going
to grasp for a metaphor here.
So it's like, you know, there'sthis uncharted jungle that you,
the teacher, have never been tobefore, and you have to go
through this jungle with yourstudent, with your kids, and you
may not be able to guide themthrough.
(01:07:43):
You don't know what's in store.
You can make sure that they have waterproof boots.
You can make sure they have all the equipment that they need,
can prepare them for what they might encounter,
so they're not surprised.
And you yourself should pore over all the information you can
find about this, this new land that you're exploring, and um it
(01:08:06):
, yeah, because I think there'sa big.
There's a big difference betweensomeone who is just sort of
showing up in flip flops and just like winging it, and
someone who's actually, you know, preparing for the known
unknowns, and I think there's
actually some certainty around what we don't know, and it's not
completely, like.
There is a space, a gamut of, like, what might
transpire, but it's not completely unknowable.
know what might transpire, butit's not completely unknowable.
Right, it's a, it's a set ofpotential outcomes and, based on
you know how fast thetechnology develops, based on
precisely how good it ends upbecoming at coding, how good it
ends up becoming at writing, um,but these are things that we
can make students aware of, thethings that they can pay
(01:08:46):
attention to.
And if you want to be a lawyer, like, keep tabs on how law
firms are implementing AI, like that should definitely be
something you think about whenyou're about to apply to law
school.
It's not?
Chike Aguh (01:08:57):
Yeah, and
are you going to be a doc review lawyer versus a litigator,
versus a pilot, versus?
I mean, the thing that I thinkabout is if you're talking to me
in that scenario, the analogythat I thought about was I
thought about Star Wars, I thinkabout the Empire Strikes Back
and again I'm dating myself.
But I think about there's thepart of the movie where Luke
Skywalker goes to the swampplanet to go find Yoda and Yoda
(01:09:19):
starts to train him as a Jedi.
And if you watch it, what'sreally interesting is that and I
never thought about this untilyou literally were just talking
you watch that sequence.
Yoda never swings a lightsaber.
He doesn't do any flips, hedoesn't do anything like that.
All he does is he just gives, like, Luke habits of mind, like I
remember when he tells him, like, lift the rock.
He's like I'll try.
He's like there is no try.
You do or you do not.
(01:09:40):
There is no try.
Um, and in many ways I wouldargue again for that teacher
category.
That's what you're doing.
You're right, these kids mayknow this tool better than you.
You don't need to compete with them there, but your job is:
how should you think about this?
And as much you know, likeevery middle schooler and
teenager who thinks they knoweverything, you kind of know you
(01:10:03):
don't.
Even though you act like you do, you kind of know that you
don't.
And having someone who can helpyou put structure on how you
should think about it and, evenmore importantly, someone to
help give you a values frameworkfor how to use these tools.
And I don't just mean theethical AI stuff, which is super
important, but I more thinkabout if you have a young person
(01:10:30):
who is interested in servingjust serving, broadly serving
their community.
I hope, when my kids are oldenough, that I can say well, how
could ChatGPT help you do X thing better?
How could you expand yourimpact with people that you are
serving using these tools?
I think that is a place that,again, teachers writ large could
(01:10:51):
in many ways help students andand also, candidly, I think
that's a a framework in ourminds that I wish the country
would take as we think aboutthese tools.
I am fortunate to spend time with folks who are thinking
about how do I use AI to find cancer markers for drugs, or how
do I use, um?
Uh, there's a great group of students at Northeastern, where I
(01:11:12):
have an appointment, who are working with the governor's
office.
They used AI to basically find intersections that don't have
road signs or stop signs or traffic lights
that should, because that would reduce traffic deaths.
They've literally mapped thisout using these tools.
We have an AI for Impact cohort with students using this in
real time, by the way, doing projects that the state either
(01:11:33):
can't afford to do or doesn't have the bandwidth to do.
That, I think, is the actual world. There's the doom
world, where, again, it's like the Matrix.
But then there is the world where not only do we keep
our quality of life, but actually we are using these
tools to solve problems that we have never been able to solve, at
scale and at speed that we've never seen before.
(01:11:55):
But that's a choice.
Alex Kotran (aiEDU) (01:11:58):
Yeah, and
to bring it all home.
First of all, thank you so muchfor coming by and I just want
to acknowledge how much betterdressed you are than me, and I'm
not going to ever try tocompete with your sartorial
skills, but nonetheless it'svery refreshing to have someone
who has been across all thedifferent vantage points in the
(01:12:22):
space that we're in, likeworkforce economics.
You're an educator on, you know, on the ground, like literally
in the front lines, but you alsounderstand the policy
policymaker perspective.
It's very reassuring that you're, you know, repeating back
a lot of these,
you know, big picture, I think, like sort of instincts that
(01:12:42):
we've had as AIEDU in terms of how we talk about this.
I want to close by circling backto what you said about there is
no predetermined outcome.
It's not that we're on a traintrack that is the AI train track
and we don't know where we'reheading.
We're sort of charting a course.
This is why I'm so passionateabout the work of education,
(01:13:09):
because, to me, the only way weget to that beneficial outcome
where AI is being harnessed forthe betterment of humanity, it's
augmenting and empoweringworkers rather than replacing
them or, you know, succumbing them to menial roles in
(01:13:31):
organizations.
Um, you know, education schoolsare where we build those
sensibilities and the mindsets.
Um, because I don't, I, I mean,we haven't had the chance to
talk about policy and government, but, um, my guess is that you
and I would probably agree that government alone can't, through
policy or dictate,
right, you know, guarantee us any one outcome. They can create
some guardrails, they can orient.
They can provide resources, butat the end of the day, this is
(01:13:53):
actually very much going to begenerated by all the people who
are going to be behind the wheelof these technologies, and so
where else?
Are the front lines, if not inclassrooms and in those moments
where a parent is sitting nextto their kid and having a
conversation with them aboutGemini and what's actually
happening and creating not justthat curiosity but also
(01:14:16):
challenging them to think about.
How could you use this to helpyour community, to help your
Girl Scouts group?
Chike Aguh (01:14:26):
The last two things
I'll say maybe kind of dovetail there.
I think one and it's a phrasethat President Biden used to use
, particularly when we were inthe beginning of the
administration, particularlyduring the pandemic days, and he
used to talk about in responseto the pandemic we're going to
take a whole of government approach, and I would always kind
of take that a step further and say that to solve a problem
like that, or a problem like this, is a whole of society approach
(01:14:48):
to your point.
It will take all of useverywhere to to figure out how
to harness this.
I mean, the second thing I say, and this is particularly, I
think, to the educators: this is going to take all of us.
And when I think about this, I think about a study that Raj
Chetty did.
Raj Chetty, brilliant economist at Harvard.
He runs the Opportunity Insights project and he has
(01:15:12):
access to the tax returns ofevery American since 1950 from
the IRS and it allows him to dothis amazing longitudinal survey
.
And he basically did this survey where he took a I
forget how large the sample was, but he basically took a bunch
of students who took basically a math aptitude test when they
(01:15:33):
were in third grade and hebasically looked at it across
all racial demographics and he'slike I want to find all the
kids who were at least like twoor three standard deviations
above.
Then I want to see what happened to them when they were
adults, and again, remember, they all tested two or three standard
deviations above the mean inthat math test when they were in
(01:15:54):
third grade.
And I want to say, particularlyin terms of the filing of
patents, I want to see how manyof them filed a patent.
And what he found was again, same aptitude when they were
eight.
Go 20 years later.
Students who were higher income, who were white, were far more,
three times more likely to file a patent than folks who scored
(01:16:16):
the same but who were African-American, who were
Latino.
And he titles his paper the Lost Einsteins paper, and he said,
ok, I've now removed theaptitude question.
I've taken all people who haveroughly the same aptitude.
So that's not the explanationfor the different outcomes.
It can't be. Something happened maybe in their lives, but most
(01:16:41):
likely something more structural, where these folks
did well and these folks didn't.
We can argue about that forever, but these folks who didn't do
well, that was a huge lost opportunity for America.
Another book that I think aboutis the Idea Factory.
It's a great book on thehistory of Bell Labs.
If you look at Bell Labs where they created the transistor,
(01:17:02):
undersea internet cables, which is why we have the internet
today globally, satellites, cell phones, I can name you 10 other
things, possibly the most important non-government
research lab in the history of the country.
If you look at who worked theretens of thousands of people.
On one hand they were veryecumenical.
(01:17:24):
They actually made a habit ofgoing to small towns,
particularly in the Midwest andout West, to go find just like
oh, this is the smart kid in themath class we're going to,
we're going to help you go touniversity, get a PhD and come
here to Bell Labs.
But also, if you read the book the entire book literally, that
goes from the founding of Bell Labs in the early 20th century
to like the eighties no women, right, no women.
(01:17:53):
I can't think of anyone who was not white, and I don't say that
to diminish the greatness of Bell Labs.
Again, we wouldn't live in thesame country if we didn't have
Bell Labs.
They had a hand in radar, which was critical to winning World
War II.
How much greater could BellLabs have been if we had used
everybody?
And so, when I think about why what you do is so important, it is,
how do we not lose the Einsteins?
And part of it is making sure that those kids not just those
(01:18:17):
kids who are two or three standard deviations above, but every kid
has access to these tools that
are going to change the world.