Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Chris Dellarocas (00:02):
Education,
education, education.
I mean it's very important and that applies to everyone, right?
It applies to our students, to us, to every kind of worker.
We need to be educated about what AI can do and what the limitations are.
JP Matychak (00:17):
On this episode of
the Insights at Questrom podcast
, we take a closer look at the impact that AI could have on organizational learning and development, including personalized learning paths.
Coming up next.
Hello everyone, and welcome back to another episode of the
(00:38):
Insights at Questrom podcast.
I'm JP Matychak.
In this episode, Insights at Questrom contributor Shannon Light sat down with Chris Dellarocas, associate provost for digital learning and innovation at Boston University and Richard C. Shipley Professor of Information Systems at the Questrom School of Business, as well as DK Lee, Kelli Questrom Associate Professor in Information Systems
(01:00):
and Computing and Data Science.
Chris and DK shared their thoughts on the rise of generative AI, its potential to transform employee learning and development, and how organizations can navigate challenges responsibly while still leveraging the advantages of AI-powered learning.
Here's Shannon Light.
Shannon Light (01:20):
Thank you both so
much for taking the time to be
here today.
DK Lee (01:23):
Our pleasure. Great to be
here.
Shannon Light (01:27):
Chris, I really
enjoyed reading your article in Harvard Business Review, which was published back in December, discussing how generative AI could accelerate employee learning and development, and, based on both of your backgrounds, I really wanted to explore this topic a bit more, and I figured it might be best to begin by just discussing how
(01:51):
generative AI is transforming the landscape of employee learning and development.
Chris Dellarocas (01:58):
Sure, I can
start the conversation.
I believe that generative AI will really shake things up in this space.
It's like having a super smart assistant that understands each employee's unique needs and tailors the learning experience just for them.
Traditional learning methods, you know, they're more like a
(02:19):
one-size-fits-all approach, but with generative AI it's more personalized.
Imagine a platform that knows exactly what skills you need to work on and serves up content that's just right for you.
And there's more: it keeps everything up to date.
In a lot of fields, knowledge, approaches, laws change all the time.
(02:39):
Learning materials can get outdated pretty quickly, but with AI, it's like having an always-on editor who keeps the content fresh and relevant without any effort from us.
And lastly, AI can also be very interactive.
It can step in and offer guidance, answer questions, clarify what people don't understand, and it's like
(03:03):
having a personal coach, making the whole learning process more interactive and engaging.
So, in a nutshell, generative AI can make learning more personal, more up-to-date and more hands-on, and so I think it's going to really shake things up in this space.
Shannon Light (03:21):
DK.
Do you have anything to add there?
DK Lee (03:23):
Yeah.
So, Chris did a great job at summarizing a lot of the potential.
So if I were to talk about something that's not been talked about right now, what I'm interested in or very much fascinated by, it's not here yet, but we have all these text-to-image models.
Text-to-text is the LLM, but text-to-video and all these
(03:47):
text-to-multimodal, and I'm sure some folks are working on text-to-simulated-world models.
So imagine, like Star Trek, the hologram.
But if you can generate these virtual realities where you can
(04:10):
then tailor it, you can think of a situation where you can teach employees in different situations.
If you're a firefighter or a police officer and you have to go through bias training or situational training, instead of it being actual, which might be slow and you need to make time
(04:35):
for, if you can train a bunch of these models to generate this.
I don't know when that will come, but that might be very interesting.
Shannon Light (04:43):
That is really
interesting.
I know you just touched on it a bit, but, Chris, are there real-world examples of how organizations are currently using Gen AI in learning and development?
Chris Dellarocas (05:00):
There are
examples of how organizations could be using it, for the most part.
Let me just give you some.
Suppose that you are a marketing and sales company.
You want to train your sales professionals in the latest and greatest sales techniques.
You could use AI to scan the work history.
(05:21):
You can see what approaches and what techniques your people, each individual, has used already.
Then you can tailor the training material into specifically what they need, what they haven't used, what they haven't used enough, what they need to know.
Similarly, if you want to train your programmers, you can actually scan the code they've written.
AI can scan the code and assess their proficiency in different
(05:46):
tools and languages, then automatically tailor the training to their precise skill gaps and skill needs.
That's one way in which AI can personalize training.
Then again, let's consider a field where things change very rapidly.
For example, environmental law.
(06:07):
There are a lot of regulations that change all the time.
You can have AI that adapts the training materials to make sure that your employees are trained on the latest set of regulations and laws.
Or digital marketing.
There are always new approaches that become trendy, and maybe some other approaches that become less effective as the world changes.
(06:27):
Again, you can have AI that is adapting the materials, so that every time you train your people, you make sure that they get the latest and greatest training that corresponds to the leading-edge and state-of-the-art approaches.
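To make the skill-gap idea above concrete, here is a minimal sketch in Python, assuming the OpenAI Python SDK and an API key in the environment; the model name, prompt wording, skill catalog and directory path are illustrative assumptions, not a description of any real training product.

    # Sketch: infer a programmer's likely skill gaps from a sample of their code,
    # then return suggested training topics. All names here are illustrative.
    from pathlib import Path
    from openai import OpenAI

    SKILL_CATALOG = ["unit testing", "type hints", "async IO", "SQL", "error handling"]

    def assess_skill_gaps(code_dir: str, model: str = "gpt-4o-mini") -> str:
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        # Concatenate a small sample of the employee's code; a real system would
        # chunk and sample far more carefully.
        sample = "\n\n".join(p.read_text()[:2000] for p in Path(code_dir).glob("*.py"))
        prompt = (
            "You are a training advisor. Given this code sample, list which of "
            f"these skills look weak or absent: {', '.join(SKILL_CATALOG)}.\n\n"
            f"CODE:\n{sample}"
        )
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        # The reply is a short list of suggested training topics for this person.
        return resp.choices[0].message.content

    if __name__ == "__main__":
        print(assess_skill_gaps("./employee_repo"))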
Shannon Light (06:45):
That's great.
Yeah, thank you for elaborating on that.
It's interesting to think about how this technology can also just help in general.
Now, of course, the workforce has changed with the pandemic and working from home and training employees in different
(07:07):
settings.
I don't know if this is something you have explored further, but the experiences that may change with onboarding employees and how Gen AI could also help in that sense.
I found it interesting being somebody who was onboarded
(07:28):
fully during the time of a pandemic.
It was really interesting and impressive because, of course, people were learning as we went on.
But that's something that I'd love to hear both of your thoughts on.
DK Lee (07:47):
Yeah.
So if I can go first, then, I think one of the first real-world use cases in companies was for onboarding.
A lot of times they had these SharePoints or internal wikis or databases that companies have been basically keeping track of and recording.
(08:08):
Then you can train Gen AI models to soak up that information and then come up and devise an initial onboarding process for new employees and whatnot.
This has gotten to the next step that, I think last year, lots of startups and companies were trying to utilize, which was like a
(08:31):
co-pilot, right, like a lot of call service centers, right.
First, those that dealt with B2C, consumer-facing firms.
When they're talking on the phone or chat, they have this co-pilot, like a Gen-AI-trained helper, so that instead of just
(08:54):
searching through this knowledge base manually, the call center person or helper person can just type and get a real-time answer on the spot, where it's there with you, a co-pilot, right, like a Microsoft Copilot, helping you solve issues in real time.
(09:15):
That was one of the first examples.
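A minimal sketch of the retrieval step behind such an onboarding co-pilot, using simple keyword overlap as a stand-in for the embedding search and vector store a production system would use; the wiki snippets are invented.

    # Sketch: retrieve the most relevant internal-wiki passage for a new hire's
    # question; the passage would then be pasted into the LLM prompt as grounding
    # context, so the assistant answers from company policy, not its own guesses.
    import re
    from collections import Counter

    WIKI = {
        "vpn-setup": "Install the corporate VPN client, then sign in with your SSO account.",
        "expenses": "Submit expense reports in the finance portal within 30 days.",
        "oncall": "New engineers join the on-call rotation after their second month.",
    }

    def tokenize(text: str) -> Counter:
        return Counter(re.findall(r"[a-z]+", text.lower()))

    def retrieve(question: str, k: int = 1) -> list[str]:
        """Return the k wiki pages whose word overlap with the question is highest."""
        q = tokenize(question)
        scored = sorted(
            WIKI.items(),
            key=lambda item: sum((tokenize(item[1]) & q).values()),
            reverse=True,
        )
        return [f"{name}: {text}" for name, text in scored[:k]]

    print(retrieve("How do I set up the VPN?"))  # the vpn-setup page scores highest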
Chris Dellarocas (09:21):
But yeah, yeah
, I fully agree.
I think, to me, the big promise is just-in-time training.
Training which is integrated into the daily workflows, so it's like having a learning buddy which is right there as you work.
So, traditionally, most companies, they separate work
(09:42):
from training, right?
Either you work or you go into training.
Now, with AI, you can actually merge the two, so it's basically like having somebody over your shoulder and, as you do a task, when the AI decides that you need some help, whether it is a training document or a chatbot that you can actually go and
(10:06):
interact with, or a drill, I mean, it can just give it to you right there.
Right, so this makes things efficient, makes things engaging, and it's fantastic for onboarding.
I mean, you know, it can really make onboarding and transitioning to a new role way smoother and faster.
So learning on the job, but with a virtual safety net.
Shannon Light (10:29):
Absolutely.
I am curious, too, based on industries.
I know that you discussed the use of Gen AI in creating immersive training simulations, but could you explain a bit more about how it enhances the training experience in high-
(10:54):
stakes professions?
Chris Dellarocas (11:00):
DK.
You want to take this?
DK Lee (11:02):
Oh, yeah, so, okay, yeah.
So a lot of companies have, maybe, an involved process that is, as you mentioned, high-stakes, where if you make a mistake it's going to cost the company millions of dollars.
I think it's also a small proportion, but they used to have these virtual reality trainings, where they
(11:24):
would use, I think, the Unity engine to actually come up with this virtual reality, and the employees can actually go in and then use it on a virtual reality headset like an Oculus, right, and then it used to take a lot of time to make this world and the scenario.
It's like, you know, it's like making a game, right, and they
(11:45):
use the game engine to do this, right.
So imagine a situation where Gen AI can just do this on the fly.
Obviously, there may be some polishing that needs to be done professionally right after, but you can iterate, and then there's a combinatorial explosion of all different kinds of scenarios you can think about, right.
I used the firefighter example because
(12:08):
it's high-stakes, right, and maybe there is a lot, like, if you're training a new firefighter.
It used to be, there are companies that provide these virtual reality training worlds and games, so to speak.
But that's limited by the designers and, you know, the game, the virtual reality designers, and their time.
(12:31):
They need to make these scenarios and then render it all and everything.
If this text-to-virtual-reality kind of thing comes in and you can just generate lots of different combinations faster, obviously it won't be, like, instant or quick work, whatever, but it'll now be more possible that you can go through
(12:52):
different scenarios.
That's what I meant.
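As a small illustration of that combinatorial explosion, here is a sketch in Python; the scenario parameters are invented, and a text-to-simulation model would be asked to render each combination.

    # Sketch: four short parameter lists already yield 81 distinct firefighter
    # training scenarios, far more than hand-built VR worlds typically cover.
    from itertools import product

    locations = ["apartment", "warehouse", "school"]
    hazards = ["smoke", "chemical spill", "downed wires"]
    victims = ["none", "one adult", "child"]
    visibility = ["clear", "low light", "zero"]

    scenarios = [
        f"{loc} fire with {haz}, victims: {vic}, visibility: {vis}"
        for loc, haz, vic, vis in product(locations, hazards, victims, visibility)
    ]
    print(len(scenarios))  # 81
    print(scenarios[0])    # apartment fire with smoke, victims: none, visibility: clear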
Chris Dellarocas (12:57):
Yeah, I fully
agree.
I mean, a lot of the current generation of simulations, even those that employ virtual reality, they fall short in terms of variety and realism.
They usually are using a small number of predetermined scenarios.
With gen AI, as my colleague said, the space of scenarios becomes exponential.
Shannon Light (13:17):
And it also makes
me want to bring up the point
of, of course, the challenges that are associated with the use of AI, especially in learning and development.
You know, we hear a lot about data privacy and potential biases.
How can organizations navigate these challenges responsibly
(13:42):
while also leveraging the benefits of these AI-powered learning tools?
Chris Dellarocas (13:49):
Oh, yeah, I
mean, there are quite a lot of challenges and, yeah, let's make clear that with the tremendous promise come tremendous challenges as well.
I mean, first of all, the kind of data that is needed in order for AI to deliver this personalized training, you know, addressing skill gaps, et
(14:11):
cetera, et cetera, is highly sensitive.
It really is performance data of employees and work products of employees, so this needs to be really safeguarded.
I mean, companies that attempt to go down that path, they need to be super careful about how they handle this data.
They need to set up strong data governance policies and be very
(14:31):
, very clear with employees about how this data is used.
They need to be transparent and they need to have, you know, beefed-up security and, of course, definitely they need to comply with data protection laws.
That's non-negotiable, I mean.
The next set of issues is bias.
Right, this is a tricky one.
I mean, the thing with AI is that it learns from the data
(14:52):
it's fed, and if that data has biases, the AI will too.
So organizations need to be checking AI systems, like giving them a routine health checkup for biases, and I think it's important to have feedback mechanisms where, you know, the trainees can report any type of situation where they perceive
(15:13):
that content is biased, so that the system can self-correct.
It's a tricky one, of course.
We cannot just set AI loose and forget about it.
It works best when it's paired with human oversight.
So think of AI as a super smart system that still needs a bit of human guidance, so that way you can catch any weirdness that
(15:36):
the AI might miss.
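A minimal sketch of that feedback mechanism, assuming a simple policy where a few independent trainee flags send a piece of content to human review; the threshold and content IDs are illustrative.

    # Sketch: trainees flag content they perceive as biased; past a threshold the
    # item is queued for human review, keeping people in the oversight loop.
    from collections import defaultdict

    FLAGS_BEFORE_REVIEW = 3  # assumed policy, not a recommendation

    flag_counts: defaultdict[str, int] = defaultdict(int)
    review_queue: list[str] = []

    def report_bias(content_id: str) -> None:
        """Record one trainee's bias report; escalate at the threshold."""
        flag_counts[content_id] += 1
        if flag_counts[content_id] == FLAGS_BEFORE_REVIEW:
            review_queue.append(content_id)  # humans make the final call

    for _ in range(3):
        report_bias("sales-module-07")
    print(review_queue)  # ['sales-module-07']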
DK Lee (15:38):
I think we have more
issues than we can talk about in
this webinar, but I think the first thing might be on the user
side.
There might be overreliance if people use it once and then they get the result that they like a couple of times, and in fact there has been documented research on overreliance on
(16:00):
these generative AI tools.
People rely on it a little bit too much, despite its hallucinations.
I mean, everybody knows about the hallucination, but there was a study comparing people's usage of ChatGPT search versus Google search, and they ran a bunch of experiments, and then
(16:20):
they found out that, for example, obviously, people spend less time getting similar results using ChatGPT compared to Google.
Now, ChatGPT, we know, hallucinates, and these users, even though they were told about the hallucination, they still didn't care, a large fraction of them.
(16:41):
So even if you tell them, they might misuse it.
And the worst thing is, even if you're an expert, to go and figure out whether this is a hallucination or not takes additional time, which you might not do.
One anecdote is that I was writing a paper and I was just
(17:01):
talking with a particular LLM.
Actually, I used all three big ones: ChatGPT, Bard and Anthropic's Claude 2.
And there was a book that I read that I used in one of my research projects, it's a book on concepts, actually, and then I was forgetting some facts, and then I was asking one of the
(17:24):
tools, and it actually created a very realistic-looking citation by an existing researcher, and the concept looked really interesting.
I'm like, I don't remember this, what is this?
And then I saw something was wrong, and I looked through the book.
It's not there.
It was able to synthesize a very realistic-looking thing and
(17:49):
present it like a fact, and that's the danger.
But I actually had to spend a lot of time going through the book to make sure.
So that's one thing on the user side: hallucination, and people being unaware.
Many people might be unaware and over-rely on it.
Another thing that I think is amusing is there is a paper
(18:11):
by a colleague, Adrian Ward.
It's on PNAS, titled People Mistake the Internet's Knowledge for Their Own.
It's talking about, basically, when people use Google, they feel intrinsically more knowledgeable than they actually are.
I'm probably butchering the results, and there are more results there.
But thinking about this and combining it with the paper that I just
(18:33):
mentioned, when people get used to ChatGPT, they might get even more, you know, have a sense of inherent knowledge, because it's just like things are at their fingertips.
Even though the result that you get from such a system might be totally hallucinated and false.
So you might have this world-level Dunning-Kruger effect
(18:56):
going on.
So, unless, you know, people are warned enough times.
Chris Dellarocas (19:01):
So, so yeah, I
mean education, education,
education.
I mean, it's very important, and that applies to everyone.
It applies to our students, to us, to every kind of worker.
We need to be educated about what AI can do and what the limitations are.
Shannon Light (19:21):
Absolutely, and
to that point I'm interested to
see how this may play a role in education for students who have these advanced technologies right at their fingertips, how
(19:41):
it may affect the teaching that goes into the actual curriculum and the integration, or if some professors moving forward may not want to incorporate the technology.
I'm wondering from you both, how do these ideas
(20:05):
apply in traditional, in more traditional education, and what may be offered right here at Boston University's Questrom School of Business?
Chris Dellarocas (20:19):
Okay, maybe I
should take a first cut at a few things.
So in my article I emphasize the potential of gen AI for personalized content, for adapted and up-to-date content, and for feedback and interactivity.
I mean, all of these three aspects can play a role in traditional education.
So, for example, imagine if you're teaching a class in data
(20:44):
science and SQL, or in history, whatever, in any topic, and then you use AI to give your students personalized practice questions every week that are tailored just for them.
AI can analyze students' performance, interests and even their preferred way of learning, and then it can customize the practice questions that the student is working on every week.
(21:05):
JP Matychak (21:06):
It's like having a
personal academic advisor.
Chris Dellarocas (21:08):
You still need
the professor, but you actually
make it even better, right, you can make the professor like a
personal private tutor.
I mean, the second thing is we can use those tools to make sure that our curriculum is fresh and relevant, and actually use them every semester to make sure of that, or at least to reduce the
(21:28):
effort of adapting.
With curriculum, things change so rapidly in a lot of fields, but the effort of keeping things up to date manually is just staggering, and sometimes things move faster than we can adapt
I mean AI can help us stay ontop of things.
And then the third thing isfeedback and assessment.
I mean, just imagine one of ourlarge classes or large lecture
(21:52):
classes with 100, 200 students.
Right, it's very difficult forthe professional DTA to be able
to answer every student'squestion, whereas AI can
actually have a conversation.
Or you can have feedback.
You can grade assignments or provide feedback.
You get your assignment back, graded, whether by a human or by a machine, but then you can actually ask questions.
(22:14):
You can say, can you explain this to me?
Can you tell me why this is an error?
Can you provide another example so I can understand it better?
I mean, this is a great complement.
It can really allow us to improve the quality of what we give our students, and supplement and enhance the value that professors and teaching assistants give to them.
And, of course, anything we do around AI educates students about
(22:38):
the technology, helps them understand, become familiar with it.
So we are preparing them for the world that's coming.
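A minimal sketch of that weekly personalization loop, assuming a topic-tagged question bank and a weakest-topic-first policy; both are illustrative assumptions rather than a real course tool.

    # Sketch: pick next week's practice questions from the topics where the
    # student's scores are lowest; the bank and the scores are made up.
    QUESTION_BANK = {
        "joins": ["Write a LEFT JOIN that keeps unmatched customers."],
        "group-by": ["Count orders per region using GROUP BY."],
        "subqueries": ["Rewrite this correlated subquery as a plain join."],
    }

    def weekly_questions(scores: dict[str, float], per_week: int = 2) -> list[str]:
        """Serve questions from the student's weakest topics first."""
        weakest = sorted(scores, key=scores.get)  # lowest score first
        picked: list[str] = []
        for topic in weakest:
            picked.extend(QUESTION_BANK.get(topic, []))
        return picked[:per_week]

    # A student strong on GROUP BY but weak on joins gets join practice first.
    print(weekly_questions({"joins": 0.4, "group-by": 0.9, "subqueries": 0.6}))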
DK Lee (22:44):
Yeah, I think Chris
covered all the great stuff.
I think one thing that we need in this day and age, I think not accounting for these tools is a mistake, because people will be using them anyway, and instead, I think the single most important tool or skill that we need to teach the
(23:07):
students is how to verify, confirm and validate.
If that, I think, is done well and you give them this tool, once they're equipped with this ability to validate, verify and confirm, I mean, each and every one of them could just progress at their own speed, right?
(23:28):
So I think that's the single most important thing.
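One small sketch of what that verification habit can look like in code, checking an LLM-supplied citation against a reference list you already trust; the references and similarity cutoff are illustrative, and difflib is in the Python standard library.

    # Sketch: a fabricated but realistic-looking citation fails a fuzzy match
    # against known references and gets flagged for the manual lookup DK describes.
    import difflib

    KNOWN_REFERENCES = [
        "Ward, A. F. (2021). People mistake the internet's knowledge for their own. PNAS.",
        "Murphy, G. L. (2002). The Big Book of Concepts. MIT Press.",
    ]

    def looks_real(claimed_citation: str, cutoff: float = 0.6) -> bool:
        """True only if the claim closely matches a reference we can verify."""
        matches = difflib.get_close_matches(claimed_citation, KNOWN_REFERENCES, n=1, cutoff=cutoff)
        return bool(matches)

    print(looks_real("Smith, J. (2019). Conceptual frames in memory. PNAS."))  # False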
Chris Dellarocas (23:34):
Yeah, that's
the holy grail in education.
In a way, the model of having a group of people who move in lockstep, right, at the beginning of every session we're all in the same place, and at the end we're all in the same place, it's a fallacy.
Every person moves through learning at their own individual pace, but so far it was not practically and economically
(23:55):
feasible to essentially assign a personal tutor to everybody, but AI can help us get close to that, right, a personal tutor that doesn't judge and doesn't get tired.
Shannon Light (24:08):
It's great to
hear both of your perspectives
on this, as being, you know, researching this and really in
the weeds with everything.
Chris Dellarocas (24:18):
I mean, I'd just like to reiterate what DK said.
I mean, the promise is tremendous, but the challenges are also pretty substantial, and I think the trick, especially in education, is to introduce AI in a way that assists and
(24:39):
enhances learning and avoids overreliance on it.
It's the classic case.
I mean, the first issue we had with gen AI is that students will use it to plagiarize, and it's not so much that.
In my opinion, plagiarism is secondary.
I mean, our role is primarily to help students develop skills
(25:01):
and secondarily, to assess them.
I mean, what really worries me is that if learners rely on AI in the wrong way, they will not develop the competencies they're supposed to develop.
So the question is, how do we adapt the way we teach them so that we can introduce the technology, we can reap the benefits, and we can still motivate and help our students develop the
(25:24):
competencies that they come here to develop?
DK Lee (25:29):
Yeah, just to elaborate
a bit and list out all the
problems that we have now, other than the hallucination problem.
How do you, you know, if you have conflicting information, how do you resolve that?
It's not obvious, and that's a hard problem.
Another big issue is, once these LLMs and gen AI have gotten sort
(25:55):
of mainstream, everybody is suing everybody for data, and everybody is trying to protect their own data.
Data has gotten another meaning after that.
So in that case, how do you then, going forward, sustain this such that, you know, every organization, people, countries
(26:22):
, whatever they might be, are pushed to protect data and make everything private, thinking this is the gold, right?
So the more and more that happens, the future gen AI will have less training data to work with.
That's another worrying thing.
(26:43):
You know, like the New York Times suing the gen AI companies, or what has been happening with the artists and authors, right?
Another thing is, in the company, how do you get, like, a contributor, like that one contractor that just makes a
(27:03):
living by being the only person who knows how to do X, Y and Z, to give that up such that they're replaced now, right?
So these kinds of, like, motivation issues are another thing with data.
Where would that equilibrium be, and how do you make that sustainable, is another big issue, I think.
Shannon Light (27:24):
All of that is
really interesting, and what
really stuck out is, of course, the data privacy, and as that
becomes not as easily available.
Honestly, myself being a marketer, it's something that I think about too.
You know, we're using this data that we have to help reach our
(27:48):
audience and help our clients reach their audience.
So all of that is definitely a big question mark as we go and we see these technologies progress.
It's super interesting, and I'm excited to see how these are integrated into learning and development,
(28:09):
to all your great points, Chris, that you brought up about the processes, especially in education.
It's all fascinating, truly.
So thank you. Thank you both.
Chris Dellarocas (28:22):
Glad to be
here, thank you.
JP Matychak (28:27):
Well, that's going
to wrap things up for this
episode of the Insights at Questrom podcast.
Thank you again to our guests, Chris Dellarocas and DK Lee, and thank you to Insights at Questrom contributor Shannon Light.
Remember, for more information on this episode, all of our previous episodes, and additional insights from Questrom faculty, visit us at insights.bu.edu.
(28:50):
So long.