Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Spoiler alert (00:00):
AI can help
us stay connected in more
ways than you might think.
Hi lab mates, welcome to theSocial Learning Lab, a podcast
about social learning at work.
In today's episode, we're talking to MyraRoldan, technologist and futurist, about
how artificial intelligence can enhancesocial learning and foster deeper human
(00:21):
connections in corporate environments.
Myra innovates ahead of the curve.
She empowers businesses to demystify AIby guiding them through their initial
steps to integrate AI into their space.
And today, she will lead us from practicalapplications to innovative uses of AI.
Together, we will uncover actionableinsights that you can implement in your
(00:45):
organization to create more engagingand collaborative learning experiences.
Let's get started.
Hi, everyone.
How's everyone doing today?
Awesome.
Excited.
Great.
We are definitely excited to haveMyra here with us and gain some
knowledge about, you know, all thisstuff that's going on in the AI world.
(01:09):
And, you know, tie it to sociallearning, which is what we do here
at The Social Learning Podclass.
So, let's just get right into it.
Myra, how can AI enhance social learningwithin corporate training programs?
So when you think about I think forcorporations, you do have to define what
(01:29):
you mean by social learning, because it'snot something that everyone understands;
but AI could really play a role in theplatforms that you, create for social
learning where it could provide likethe prompt of the day or, you know,
pre, creating some kind of engagement,like, oh, here's a, here's a quiz for
(01:52):
everyone; let's like do this or let'shave, here's a case study we can discuss.
So, you can really have an agentwithin your social platform that
can actually drive some of thatengagement and keep people interested.
So, sharing, you know, a nugget a dayto do those nudges that are needed
(02:12):
in order to have like that effectivesocial learning, cause we all know
that unless someone's posting somethingthat everyone can react to, sometimes
the engagement is just like very low.
We have a social learning platform atAmazon that we use for educators and
I have to, you know, or my coworkerhas to personally like do a prompt in
(02:36):
there, but imagine having that automatedwhere it's just where can gauge where
everyone is, it can figure out likewhat's the chatter that's been going on
and then pose a question around that.
So, there's some reallygood potential for it.
And, I think that we just need to seebetter implementations of AI right now
(02:57):
in order to get people being able to putall these puzzle pieces together, right?
Like the puzzle piece of sociallearning, like what is it?
What's the impact?
The puzzle piece of, youknow, the generative AI and
how it can fit into things?
And I think that's where everyone'sstruggling right now to, like, kind
of put those little puzzle piecestogether to figure out what is the ROI.
(03:18):
What's the impact and how does it align?
How can I help you align toyour overall business strategy?
Yeah, that makes sense.
And, you know, I think it also, ittakes a lot of time too out of the whole
process that you would have to havesomeone specifically to like figure
out what are people talking about?
What are like things that youknow, we can post and say?
(03:40):
And, sometimes the engagements getslost because there's no one doing that.
So, it's just kind of like havingthat AI there doing that back,
you know, the gears running, whileeverybody else is doing their job.
I think that's going to really improvethat workplace engagement when it comes
to like employees interacting withwhatever you want them to interact with.
(04:00):
I would say.
Yeah.
I think that also like, you know,like people are still divided.
There's like this fear still ofAI; and, it like oh, no, it's
gonna take someone's job, right.
But what it's, what we really have tolook at it is like, it's not taking
someone's job, but it's taking off kindof like that heavy work off of someone
(04:21):
that is more repetitive and allows thatperson to reskill and something else and
really focus their creativity in areaswhere AI is not so great right now.
Talking about those tools and thoseplatforms, do you have any examples of
AI tools or applications that, you know,have successfully fostered collaboration
(04:43):
in the workplace at the top of your head?
Maybe
Right now, as far as fosteringcollaboration, like I'll tell you for
like for slack, you can create an AIbot that could drive some engagement.
But again, no one is actually, I haven'tseen a good implementation of it.
(05:05):
So, we're still really early stages,which is exciting that we're even
beginning to explore this, because thoseuse cases are starting to come out now.
So, I would say, soSlack has that ability.
Teams has that ability also, butagain, it's, it takes someone to
think about like, what is the processto actually, that we're trying
(05:26):
to automate to create engagement?
Like what's the goal, right?
For the employee group?
Is it communication?
Is it educational?
Is it like, what is it?
And then, being able to put somebrainpower into like this is how
we can leverage this specific,you know, like AI into this
specific area to drive engagement.
(05:49):
So, I think it's that critical thinkingpart that we're really good at, right,
to be able to figure out like and problemsolve for those specific situations.
So, I would say those are the core likeenterprise platforms that I've seen.
There are some like social platforms,like general social platforms,
(06:10):
like think about filters on Tiktok,right, which are my favorites.
And, just those adding those little tagson Tiktok that are AI driven based on
your content that you can add on videos.
So, from a pure social perspective,like I think all the social media
platforms have some kind of a way touse AI to engage, but, in an enterprise
(06:32):
setting, they are very far behind.
Once again, like every podcastthat we have, everything is always
like the purpose, like, what is thepurpose of what we're trying to do?
And then, once you figure out thatpurpose, then you can work with that.
So, I think that's the key heretoo, trying to figure out what do
you need it, what's the use, like,what are you trying to get at?
(06:53):
And then, you know, usingthat to create whatever it is
that you're trying to create.
Can I ask what may be a silly questionto tack on to what you just asked Myra?
So, you know, when I think ofsocial learning, it is like
see, imitate, or interact with.
And so like, is what Facebook originallycreated, where it would just start
(07:14):
to find friends for you, is that AI?
And then, you know, I'm seeing likenow mentor matchmaking platforms, where
they'll, you know, go through your giantenterprise organization and link people
together, is that AI enabled or is thatjust like simple tagging and algorithms?
Yeah.
So, believe it or not, a lot of theoriginal AI was just if-then statements.
(07:36):
If this happens, then thishappens, right, like triggers.
I think now we're seeing more purealgorithmic recommendation systems, right.
So, some platforms are usingthose recommendation systems that
are ML driven that are lookingat users at a whole, right.
So maybe, I think I've seen a mentorplatform where you had to like answer a
(07:58):
bunch of questions about yourself, andthen it would take all that data and
match you with someone that met someof the, you know, the criteria that you
had or the areas of focus that you have.
So I, I won't say that allmentor platforms that do
matchmaking are AI driven.
I've seen some where I, I wassurprised to find out that there
(08:19):
were people actually matching people.
That's kind of, I mean, that's cool too.
That's all that's, but
Less scalable, but...
Yeah, it's not scalable, right?
And, more than likely, it's therandomly selecting people, right?
So, which one will give youa better experience when
you're looking for a mentor?
You know, random is like, I don't know,50 50 chance that you're going to get
(08:40):
someone that actually align, you know,aligns to like what you're looking for
mentor for, whereas with AI, you canbe like, this is the specific problem
I'm trying to work on or improve.
And, it can actually match youto someone who is great at that.
So, yeah, I would say that youwould be surprised at the number of
enterprises that are so far behindright now; and the number of tools.
(09:05):
It's again, we are, we're at thetip of the iceberg right now.
There's so much that,that needs to happen.
The iceberg is just breaking throughthe water right now, you know.
Have you guys seen those videos whereicebergs just, kind of, like, come up from
the ocean or whatever, like chunks of ice?
That's where we're at right now.
It's like really slow.
(09:27):
And, we have people who are like, in thewater with floaties; we have people who
are on the edge, sticking their toes in;we have people who are on their car, in
their cars with like hats and sunblockand looking out at what's happening;
and then you have people that arebuilding sandcastles who are clueless to
everything that's going around, right.
I'm trying to figure out which one I am.
(09:47):
I know
yeah, me too.
I'm over here like with mythinking hat on like, um?
I think that's the experiment briefright there; someone, whoever, Rocio.
Yeah.
Yeah, like are you wearing floaties?
Are you on the edge with like,you know, your little rubber ducky
tube stick in your, what is it?
The flippers sticking your foot in thewater going up to your ankles only?
(10:11):
You know, are you in your car refusing toget out, because like you don't like sand
and you don't like water or you don't likethe beach, right, to look at the iceberg?
Or are you, you know someone who'sjust like down there building the, the
sandcastle like yeah, look at this,look, look what I made right like but
it's not really it's so far removed?
(10:33):
Yeah, it's so interesting
Maybe, I might be the dingbat wholicked the ice and is now stuck to it.
I'm thinking about like whereI am relative to the iceberg.
It's one factor is like theprivacy element and security.
So, I'm wondering if you could speakto that, like, again, going back to
what we were talking about with thesegreat applications with social learning,
(10:54):
like obviously it's digesting a lotof information about us and our teams.
So, how might organizations approachthose concerns about their team?
Yeah, this is a conversation that Ireally enjoy and it's just loaded,
because this is a lot of what i'vebeen doing consulting work around.
I've spoken to organizations andthey're like we need employee
training for generative AI; andthen, I ask them, "well, do you have
(11:16):
a policy for like how you're goingto allow generative AI to be used?"
And, the answer is always a big no.
" Oh, we're just experimenting.
It's fine.
We don't need policies.
It's fine" And then, I startto dig into like, well, do you
have you know data policies?
You know, do you have like guidelinesas to like how data should be handled to
(11:37):
protect your customers and protect your I.
P.
And then, they're like, oh, no.
And, I would say, from aenterprise perspective, you
need to have data policies.
You need to have those guidelines, right?
Whatever you're going to call it, you needto have guidelines for how AI is going
to be used; what AI can be used; and italso helps educate the business when we're
(11:59):
building those out to really understandlike what tools are really safe, what
tools really aren't safe, what tools, youknow, how do you do research on the tools?
How do you vet a tool?
How do you decide if atool is safe to use or not?
So, I had ran during TK this year.
I did a, it was just a talkand I introduced data poisoning
(12:22):
and prompt injections.
Things that no one had heard of.
And, I really try to keep these to aminimal, like I don't try to introduce
like everything, but we talked aboutdata poisoning and data poisoning is
where, you know, you use, it can besomething as simple as like using outdated
data, right, to put into a system.
You're poisoning that data, because nowit's going to start, the AI is going
(12:43):
to start referencing like stale data.
Data that's so out of date, that it'sgoing to create misinformation and we
know what happens with the misinformation.
It spreads very fast,you know, very quickly.
And then, with the prompt injections,I built afterwards an agent that I
shared and I got a couple people,right, where it's, you ask it to do
(13:03):
something; and it asks you for a sampleof your writing; and it asks you for
a topic; and then, it tells you like,"Oh, you're being hacked, right?"
So, when you think about the tools thatyou're using, especially at an enterprise
level, you need to read the terms of use;you need to understand what the security
posture is; you need to understand whatcountry the tools are being built in;
(13:25):
and you need to, I always recommend,recommend having a committee that will,
you know, review tools and say yes no.
It shouldn't be like one person.
I just feel that, that feels like adictatorship to my, to me like, you
know, it's one person say like no youcan't use that tool; but when you have
a committee, you know, there can besome debate and there can be like,
(13:46):
okay well, they can use it in thisspecific use case, right, because then
you have more bartering power, power.
But yeah, I think that one if youthink your data is private in today's
world, you are delulu, because ifyou have one of these right And
you walk around with one of these.
(14:07):
It's a smartphone, by theway, if you're listening.
Yes, sorry, smartphone.
Smartphone.
The mini computers, right, thatwe all carry in our pocket.
Your data is not private.
Your location isn't private.
Did you know that your smartphone,I'm going to share this, did
you know that your smartphonestake a picture every 30 seconds?
You can't turn that off.
It is a feature, again, so likethings that we you know, TIL, right,
(14:32):
today I learned, my smartphone istaking pictures every 30 seconds.
It doesn't store it.
It doesn't save it, but it is takingpictures every 30 seconds, right.
And, when you think about AI, Imean these have AI in them, our,
your smartphones have AI in them,like if how many, how many of you
have used siri or what's the one?
(14:53):
Semi intelligence.
Siri and me don't get along.
Yeah, no, she doesn't like me either.
And, if I have like Alexa'sin my home, I am I'm under no
delusion that my data is private.
I, I believe that there's a hundredAlmira Jaiman Fernandez Ortiz Ramos
out there with, you know, using mydata to get jobs, using my data to get
loans, using my data to like, you know.
(15:16):
I, I am under no delusion thatmy data is secure, just because
like we do online banking, youknow, everything is virtual now.
So, when you think about AI and the,and the data privacy, I think the
responsibility of, of an organizationis to ensure that they're handling the
data correctly; and that they're notusing customer data and public generative
(15:41):
AI tools like a chat gpt, right,where you're paying 20 bucks and then
you're feeding in generative AI data.
I think the other thing is that,I have a lot, i've seen a lot of
organizations that are using chat gpt.
They're allowing their people to usechat gpt, but it's like Myra has a
20 account, Nicole has a 20 account,Rocio has a 20 account, Katie and
(16:04):
Diego have, you know, and so we haveour individual personal accounts; and
then, we're taking company records,company information, and feeding it into
our personal generative AI accounts.
And, even though, OpenAI has changedtheir policy saying now that they will
not use your data to train their models.
(16:26):
They're still usingsomething from that data.
Do you know what I mean?
And, that data now is becoming public.
So, we need to think about that.
I always tell enterprises you shouldget an enterprise license of chat gpt
if you want it, if you want to use it,because then everyone in your organization
will sign out into that enterprise, youown that data, because the license for
(16:49):
Chat GPT on the personal $20 accountis like the user owns the data, right?
So like, I put in companyinformation, I own that company
information, not the company.
So, just some things to think about.
I feel like that was a rabbithole question there, Katie.
We could have a whole week ofpodclass to talk about all that stuff.
(17:10):
Yeah, that is one of, it's deep.
It's deep.
So
So, I guess we could start just make sureyou read the terms of use that we don't
read and we just accept for the most part.
I'm guilty of that.
I don't know if anybody else isguilty of that, but definitely guilty.
Would you say maybe like the takeawayfor me is like transparency about, right?
(17:31):
If you're going to use it ina social learning setting.
So, I'm going full circle now.
At the beginning Myra, you were like, "Oh,a great tool, which I love too, because
I've been thinking about that myself.
We do a lot of manuel promptingin like user matchmaking, right?
So, if AI is now introduced intothis community space and does that
for you, people have to know the AIis in there and reading everything
(17:52):
they do; and what happens when theAI starts prompting with your data?
Will you get in trouble if you saysomething, right, like all of that.
So, I think transparency is what peaksfor me in that whole conversation.
Yeah, definitely.
And, and again, if you're going touse a, an AI and a social learning,
it should be, and you're using it ina corporate setting, you should have a
corporate license, an enterprise license.
(18:14):
That's three good takeaways.
Check, check and check.
Bringing us back to something thatyou were mentioning just earlier, we
were talking about mentor matching.
And then, it reminded me, you know,sometimes when we do something that we
often recommend when we talk about sociallearning and onboarding, we talk about
how the power of having mentors as, youknow, part of your onboarding process.
(18:36):
So, that reminded me of that.
So, I wanted to ask you, like, whatrole do you see AI playing in the
onboarding process of a company?
But keeping that like human, you know,connection between employees where
they're still getting that socialinteraction with other employees or
other new hires, or do you see any?
I actually do.
(18:57):
So, think about like some of the,like onboarding can be overwhelming
from an employer perspective, right?
Like it's like, oh, whatdo they need to know?
What do they need to get access to?
What do they imagine beingable to automate that?
Right?
Like, you're like, this is a new hireand then the AI automatically creates
(19:17):
an email; automatically gives access tothe right tools based on the job role;
automatically makes a recommendationfor onboarding training, because
this is a training that we're using.
And then, that same AI does nudges, right?
Like where we chunk things down.
Like I, one of my things, I had thisconversation the other day, because
(19:39):
we're talking about creating an emailto send out that would send information,
and I was like it has to be drip, right?
And, I was talking aboutthis with the customer.
It has to be drip.
You can't just write one long emailhave a bunch of links in there and
expect someone to do with that, becausethey're not going to read the email;
(19:59):
they're going to click the first link;and then they're going to forget where
they where that email went, right?
They're gonna be like, "oh, I rememberthey sent me an email with a bunch
of links I didn't read it all andnow I need one of those links".
So, it's not the, if you automatedthat, right, and had AI write that
content and send it out at spacedtimes; so then, you're reducing
(20:22):
the cognitive load on someone,right, because think of onboarding.
Onboarding is likedrinking from a fire hose.
And, imagine if you told themlike, okay, so like you're new,
this is your first step, you know,these are this is your first step.
Did you complete that?
Okay.
This is the next step.
Are you done with that?
This is the next step, right, likegiving someone those baby steps,
(20:44):
so it doesn't feel so overwhelming,right, where you want to quit.
I remember my, listen I, I,i've been at Amazon seven years.
I remember my first day at Amazon.
I sat in this orientation with 200other people, and it was like, it
what, they did a good job, right?
It's like, you walked in, you tookyour picture, you walk somewhere else,
(21:07):
you got a lanyard, you walk somewhereelse, they gave you a temporary
badge, you walk somewhere else.
It was like a, you know, well oiledmachine, because we were in person, right.
You sat in this room for two hoursand listened to someone, then
your manager came and got you.
And, my manager never came toget me because they forgot,
you know, what I mean.
So, they needed the nudge.
(21:28):
Yeah, they needed the nudge,you know, what I mean?
Where now, onboarding is very different,you know, especially with remote people.
Even, if you're in person, it's like youstill have to have people dedicated to it.
But imagine being able to have likesomeone, welcome someone in, show them
to their desk, show them around right;and then maybe have like an AR kind of,
(21:52):
or like scavenger hunt; and then havethem go to their desk, do some training
then part of that training is like go andphysically do this other thing, but it's
AI tracking it and imagine being able totrack that data as they complete too some
of those other physical tasks, right.
The amount of analytics you could getand the amount of performance evaluation
(22:13):
you could do on someone right from theget go, like how long did it take them?
Oh, wait, they, they wereoff track, where'd they go?
You know, they didn't completeit, like you can see completion
rates and stuff like.
It would, I would, I havethis whole system in my mind.
It's a rabbit hole.
And, another rabbit hole.
And then, you know, how much youcan, with that analytics, how much
you can improve the experience?
(22:34):
Like it's constant improvement,because you're getting like
constant data fed into the system.
So, it's not, you know, a lot ofcompanies, maybe not now, but I
would, I don't want to say a lot ofcompanies, let me not generalize.
Companies in general have sometimesvery archaic ways of onboarding.
So I think that this is going to beinteresting to see how everything develops
(22:55):
Yeah.
But again, it's, it's helping theseorganizations figure out how to
put the puzzle pieces together.
And, we're still at the beginning likeyou say, like we haven't even reached
any kind of hype yet on that iceberg.
Talking about, you know,improvement on doing things better.
I think this is an important questionwhat are some pitfalls of using
(23:18):
AI and how can they be mitigated?
Ok, this is another rabbit hole for me.
We're going to have to have a parttwo, three, four, five, just like
a regular guest on the podcast.
So, I would say one of the main, the mainpitfalls of using AI is our dependency
and reliance that we are developing on AI.
(23:41):
So we, I've seen this a lot and I'veheard stories of it where people are
just like, "Oh, I used AI to createthis", but they did not read through it
to vet it, to make sure that the contentis okay; and then, there's something
in there that's questionable, right.
So, you have to have a human in the loop.
You have to have humanintervention for it.
(24:03):
There has to be someone that's reviewingthe outputs that the AI is providing.
Yes, it's good a lot of times andsometimes it's so, sometimes it
feels like, especially if you're notfamiliar with the topic, it may feel
like oh that's like legit, right?
But if you did some research,you may find out that it's wrong.
It's able to write in such a waythat's convincing that we become,
(24:26):
we're just taking it at face value.
And so, I think that's one of the majorpitfalls is that we're getting lazy
and the dependency on generative AI isgoing to create, I had this conversation
with, quickly with Dan Pink, where Iwas like, do you think that generative
AI is going to have a positive ornegative impact on wisdom over time.
(24:47):
I believe it's going to have a negativeimpact and that scares me, because
it just means that we're going to,we're not going to think critically.
We're going to just be like, "oh, it'sthe AI wrote it; it's, it's, it's right,
cause I read it on the internet," right.
I, I run, I read it on the interweb,so it has to be true, remember that?
(25:07):
Where we're like, you can't, youcan't rely on everything you read
on the, on the interwebs interwebs.
And, we still do.
We still do.
So, it's just going to getworse with generative AI.
And the other pitfall is likethe ethical, you know, in,
ethical bias, our history's bias.
(25:28):
You know, it's bias againstdifferent communities.
And so, that is going to have a directimpact into the AI systems that are
built that then we begin to use.
So, a great, I was talking toone of my friends last night.
She's an AI ethicist and we're talkingabout the use of AI for finance.
(25:48):
And, you know, we have thingslike historical redlining,
the homesteading act.
You have, you know, rules against women,because women could not get checking
accounts and bank accounts up untillike the 70s, you know, on our own.
So, that wasn't that long ago.
And so, all of that bias is fed into theseAI systems; and then those AI systems
(26:09):
are making decisions based on history.
It doesn't know any different, right?
It's saying historicallythis is what was done.
So, historically, you know, it's not it'snot going to try to change the system.
It's just going to do what it knows.
So, you have to have a human in the loop.
Yeah.
And then, just to thinkabout, like, who's using AI?
Who has the capability ofhaving those systems in place?
(26:31):
And, who isn't using AIor feeding it information?
Or so, that bias issomething that worries me.
Access.
Although, I'll tell you that OpenAI.
They kind of started to close thedigital divide a bit when they made
it available on phones, when they madeChatGPT available on phones, because
(26:52):
now you have people, like, you know, inPuerto Rico, everyone has a phone, right.
And, think about like Louisiana; thinkabout like Costa Rica and Colombia and,
you know, like Africa, people have phones.
So, they made, the thing is awarenessthat has to be a level of awareness
that this thing exists; and theyneed to figure out like how to
(27:15):
access it, so then they can use it.
So, that's the paradigm right therethat is, you know, what makes up
a lot of our digital divide too.
Totally unrelated to social
learning.
Yeah.
I'm going to be the derailer.
Rocio, if it's too far, youreally vacuum, but I guess you
were talking about the divide.
It's just interesting to me,cause we're this tiny team, right?
(27:37):
We're four core people,plus whoever we scale to.
And then, I think about enterprise,who has the money to then innovate
and buy all these tools and get theenterprise chat GPT account and get.
And so, I do wonder, you know, wethink about social, but then I'm
like, well, how does that impact itto be then due to small businesses
and small teams and to the small.
(27:59):
I guess what I'm trying to say is,does a small person, small team, small
community, small whatever lose out?
And, does it further push usinto this kind of big monopoly
version of life and business?
We're already in it.
We're in monopoly already, right?
How do we get off the board?
We don't.
Yeah, I think that you have tolook at it from the personas
(28:23):
that are using it, right?
So, you have your corporations andthen you have your individuals.
You have your small businesses, right?
So, let's say there's three personas:
the big corporation, there's the (28:28):
undefined
small to medium business, thenthere's the individual person, right?
They all have different needs.
The small business can't afford toget access to the same tools that
the enterprise can, right, becauseit's so expensive that we have to
frankenstein things together, likeI was thinking about that last night
(28:48):
because we were talking about Yeah,
Love to train an LLM, butit's not happening, right?
Yeah.
It's something as simple as getting a CRM.
You can't afford salesforce,because it's so auto.
It's so expensive that you have to lookat something a little bit different
Yeah.
That
may help.
That's
all AI powered now.
Yeah.
Yeah.
And so, and then, the, an individualdoesn't need a CRM, right?
(29:09):
They need something different.
So, the needs are differentfor each type of persona.
And so, when you think about likean individual, as long as they can
figure out, like, understand, like,hey, there's this thing out there.
This is how you access it.
Here's how you use it on a phonethat you have right in your
pocket, cause it works here.
(29:30):
There could be benefits from asocial learning perspective for that
person to learn language, right?
Like think about, I think aboutwhen I was learning english, right?
I thought I had good english.
I did not.
My english was not very good looking whenI first moved to the states, you know, and
just think if I had like a social learningplatform where I can engage with other
(29:52):
people; and there was some kind of AI thatwould give me like regular, like, I dunno,
like nudges, like, "hey, here's the wordof the day," and I can ask other people.
I was lucky enough to have a teacherwho we used to call dictionary Delmore,
because she would make us, your namewould go on the board and she would
make you write the first 25 wordsof the dictionary with definitions.
(30:12):
And, I wrote the entiredictionary with her.
So, that helped my language, butimagine having something like
that for social learning, right?
But not as a punishment, butmore as a way to drive learning.
Yeah, it's just, there's so much.
Yeah, and scaling your teacher, that meansyou write the dictionary every day, right?
Yeah, I, like..
(30:33):
If you think about access, we don'tall have access to that lovely woman.
Yeah, yeah, like, MerriamWebster Dictionary and I were
best friends in second grade.
And, yeah, Dictionary Delmoreand I were not best friends.
I think this is why we're friends,though, because I also read the
dictionary for fun as a kid.
I was weird.
I had to write the words with allthe definitions in the dictionary.
(30:56):
It was, I wrote A to Z, like, no joke.
Gotta see that notebook.
Oh,
That's a lot of writing.
That is a lot of writing.
So, as we wrap up our conversation,like Nicole said, let me just
bring us back to our topic.
(31:17):
Well, I guess that was on topic, justnot like social learning, but it's okay.
Myra, what is the one key piece ofadvice you would give businesses,
looking to leverage AI to enhance,you know, their social learning, human
connection in their organization?
Yeah.
I would say contract me.
(31:39):
Perfect.
I'm kidding.
I would say like, you need to start byhaving some guidelines for, how you,
and you would like, you're gonna allowAI to be used in your organization.
What tools will you allow?
And, that requires you bringing yourlegal department, your IT department,
your finance department, yourleadership team to really make those
(31:59):
decisions that then you can send outto employees, and say like, "hey, this
is" an, and have a way for employeesto be able to submit new tools that
they may be exploring, because youdon't want to stifle innovation, right?
So, you want them to, to be able touse tools safely and responsibly,
but you want to make sure that thosetools are vetted for security, for
(32:21):
privacy, for how your data is used.
You know, is your, is your corporatedata going to be put into the system and
then be available out to anyone when theywrite, when someone writes a prompt that's
like, write me a business plan similarto like, I don't know, OpenAI, right?
So, I can launch my new businessand it just spits out open
(32:42):
AI's business plan, right?
That's not, you don't want that.
So, you want to be able toprotect against that; and I think
it's also, educate yourselves.
Education is key in all of this.
Reading a few articles doesnot make you an expert.
Be careful with the type of expertsthat you bring into your organization.
Make sure that you vet them.
We have, i've seen a lot of people whohave become overnight experts where they
(33:05):
never did a thing with AI, and suddenly,you know, one year in they're an expert.
I am not knocking those people,what i'm saying is Generative
AI is new to everyone.
No one's, it's evolving so quicklythat it, no, it's hard to say
I'm an expert in anything, right?
I don't claim to be an expert.
(33:25):
I just have the advantage of like, Isleep, eat and drink this stuff, right?
And so, you want to make sure youhave people that are knowledgeable
and that are going to be able to helpyou look three to five years into the
future and not just in the right now.
I think to cap this off, Rocio, ifyou're on board with this, I would
love Myra to tell the story of thethree to five years that we heard
(33:47):
before we hit record to help everyoneunderstand just how fast things move.
So, let me give you an example.
Well, in- 20 I think it was 2019,so I had read, I had been reading
the papers, OpenAI I had written.
They wrote their first paper in 2018.
And, I submitted, I submitted toATDTK a session for machine learning.
(34:09):
They accepted it, because,you know, they were looking.
They always look for things that arenovel, which is why they're so great.
And so, I was like, in my head, I waslike, this is like, really important,
because I see this having a, an impacton, on us in the future and so it's we
need to like on board with this now.
So, I was really excited, youknow, I forget we were in San Jose.
(34:30):
I think it was and so it's when theystarted doing silent disco also, which
I fell in love with where you couldwear, because it's like, you know, you
had to wear like the little earbudsand you can tune into any session.
It was like really well done.
And, I remember running my session.
I ran two sessions.
So, this is, I ran the machinelearning and a design thinking, cause
(34:51):
I do design thinking for everything.
So, I ran machine learningand it was really interactive,
like I had people doing stuff.
And, I had 10 people attend my sessionand half of them were my friends,
because they were supporting me; buteveryone's like, "oh, this is great, but
I just don't know what I would do withit, like I don't I don't see the value
(35:12):
in it" or "I like it's great, but I,you know, we're not, this important".
And, fast forward, right, justa few years; and here we are.
I did, just did a session during ATDICE and had over 300 people in my room,
right, and now, I'm a rock star, suddenly.
It's a little like, surprisingto me, because this is stuff that
(35:33):
I've been talking about for years.
And, you know, I felt like chicken little,like the sky is falling, the sky is
falling, and everyone's like, no it's not.
Like, show us, you know, and so I reallyfeel like I love the chicken little guy,
because of that, because I, I alwaysfelt like that, like, you know, when
in the movie the Martians finally come?
(35:57):
Our martians are here, and it is,you know, chat GPT, and I'm like,
see, I was telling you, so...
The whole time.
The whole time, you justdidn't listen to me.
You didn't believe me.
And now, everybody's like runningaround trying to figure it out,
which is also seen in the movie,
but I definitely get that.
Well, Myra, thank you so muchfor being here with us today.
(36:20):
We definitely learned some things,created a checklist, like a mental
checklist of things that we shouldbe doing and shouldn't be doing.
So if, if people would like to learn moreabout your work, where can they find you?
The best place is LinkedIn.
You can find me underMyra Roldan on LinkedIn.
That is definitely the best place.
And then, my company website isundesto.ai and it's u n d e s t o . a
(36:47):
i and we're also launching an academy,in partnership UnDesto with Your
Instructional Designer we created the A.
I.
Academy and we're gonna be launching that.
That's gonna be a business to business.
So, I say everyone should keep their eyespeeled for that, because we are going to
be launching some amazing work together.
(37:07):
Yes, we're all very excited about that.
Once again, thank you for beinghere with us and sharing your
knowledge; and you are a rock star.
You've always been a rock star
Oh, thank you.
Go tell my mom that.
She won't believe you.
To wrap up our discussion on howAI can enrich collaboration and
employee engagement in the workplace,here is a quick recap of some
(37:31):
key takeaways from this episode.
AI enhances corporate sociallearning by providing personalized
engagement, driving participation,and aligning with business strategies.
AI can streamline onboardingprocesses by automating tasks and
reducing cognitive load on new hires.
(37:51):
Effective AI implementation inenterprises requires robust data policies
to prevent risks and misinformation.
It also demands comprehensiveguidelines and involvement of key
departments to ensure security,privacy, and continuous education.
Talk about these things with your team.
(38:12):
Keep these insights in mind as youexplore the potential of AI in your
organization's learning activities.
Now it's your turn.
For your experiment, choosea workplace tool that already
incorporates AI features, such asMicrosoft Teams, Slack, or Zoom.
Ideate on how one of these AIfeatures can be further integrated
(38:32):
into your workplace to improvesocial learning among your team.
Identify one specific AI functionor feature of the tool that you can
leverage to enhance communication,collaboration, or knowledge sharing.
Develop a quick action plan toimplement the use of it more
effectively to boost social learning.
(38:53):
Let's see how the tools we alreadyhave can bring us closer together.
You can find the full experimentbrief in the show notes or the Social
Learning Lab community on LinkedIn.
In the community, you can also shareyour stories, get feedback and insights
from peers, and comment on others ideas.
If you have enjoyed this episode, pleaselike, subscribe, or share so that we
(39:15):
can continue to build a supportivegroup of social learning enthusiasts.
Until next time!
Keep making learning that matters.