All Episodes

November 19, 2024 48 mins

In this insightful episode of The Intern Whisperer Podcast, Margie Meacham, renowned as the "Brain Lady," joins host Isabella Johnston to discuss how neuroscience and AI are reshaping talent development. Margie is leading the way with her new company and platform, Learning2Go.ai, which blends the power of the brain with cutting-edge technology to enhance learning experiences.

She also shares the core ideas from her books AI in Talent Development and Brain Matters, providing actionable strategies for improving learning outcomes in the workplace. This episode is essential for anyone interested in the future of work, learning, and innovation. Links to both her books are below.

 

Link to AI in Talent Development by Margie Meacham on Amazon: https://www.amazon.com/Talent-Development-Capitalize-Revolution-Transform-ebook/dp/B08PDBKYPT/ref=sr_1_2?dchild=1&keywords=Margie+Meacham&qid=1608145923&s=books&sr=1-2

Link to Brain Matters by Margie Meacham on Amazon: https://www.amazon.com/Brain-Matters-anyone-anything-neuroscience/dp/1508722137

SPECIAL GIFT from Margie: a FREE gift for listeners. Register to receive a free month of access to her AI Academy. It launches January 2025, and you can take as many courses as you want for 30 days: https://tinyurl.com/LearningtogoFREE

Key Takeaways:

  1. The Future of Learning: Margie emphasizes the importance of understanding neuroscience to improve learning and how AI can be leveraged to enhance talent development.
  2. AI’s Role in Talent Development: Practical examples of how AI can transform training programs, including using bots for onboarding, coaching, and improving workplace performance.
  3. Personalized Learning at Scale: Margie introduces Learning2Go.ai, a platform that blends neuroscience-backed strategies with AI to deliver customized learning experiences tailored to individual aspirations.
  4. Ethics in AI: Margie dives deep into the ethical considerations surrounding AI, discussing how we need to nurture responsible use of these technologies in the workplace.
  5. The Power of Play in Learning: Margie shares how maintaining a sense of wonder and embracing playful learning experiences is essential for personal and professional growth.

Contact Margie Meacham:

Don’t miss out on Margie’s valuable insights into the intersection of neuroscience, AI, and learning—subscribe to The Intern Whisperer Podcast for more episodes with thought leaders on the future of work and innovation!

We hope you enjoy this week's episode of The Intern Whisperer. The Intern Whisperer Podcast is brought to you by Employers 4 Change - Increasing #Skills #DiversityEquityInclusion #recruitment and #management for #interns and #employees alike. 

Apply today to be an #Employer4Change that invests in #intern #talent and #employees.

 

Want a break? Play Intern Pursuit Game on Steam. 

 

Thank you to our sponsor Cat 5 Studios.

Podbean: https://internwhisperer.podbean.com

YouTube


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Hi, welcome to The Intern Whisperer.

(00:05):
Our show is all about the future of work and innovation.
And my name is Isabella Johnston, host of The Intern Whisperer Podcast, which is brought
to you by Employers 4 Change, helping hiring teams recruit and upskill their intern talent
and employees.
Today's guest is Margie Meacham.
She is well known as the Brain Lady.

(00:27):
And we'll find out why during her interview, her realm of wisdom and knowledge is a captivating
dance of neuroscience and AI that is brought together as a symphony of personalized learning
experiences, uniquely tailored to a person's aspirations.
This is straight from her LinkedIn folks.

(00:49):
So I really love that it was so colorful with its language.
It just really spoke to what her gifts are.
And I'm thrilled to have her on the show.
I want to make sure that I give a little plug for her while you're waiting to hear more
about her.
She is launching her first academy.

(01:11):
It is called learning2go.ai.
She is an expert in this and this show is something that everybody should listen to and
understand because it is all about how it changes the talent and the culture inside
of your company.
Her platform learning2go.ai merges AI with neuroscience to transform talent development.

(01:34):
And what that means is it's helping you as the employer, the HR professional, or the
CEO of your company make sure that you are providing the best possible learning
for your people.
So it uses neuroscience backed strategies to secure your place and your employees in

(01:55):
the future of work.
So that website is learning2go.ai.
She is also the author of a couple of really great books.
One of them is called, let me make sure I have it up here in front of me, AI in Talent
Development.
This book capitalizes on the AI revolution to transform the way you work, learn and live.

(02:21):
And so one of the things that I think is the biggest takeaway from this book is that she
describes the benefits, uses and risks of AI technology and offers practical tools to
strengthen and enhance learning and performance programs.
So in layman's terms, Meacham demonstrates how we can free time for ourselves by employing

(02:43):
a useful robot assistant, create a chatbot for specific tasks, such as a new manager
bot, a sales coach bot or a new employee onboarding bot, and build personalized
coaching tools from AI-processed big data.
So that is one of her books.

(03:04):
And then the other book that she has is called a, I'm sorry, I thought I had the other book
up here.
So just a minute, let me make sure that I have this for you.
Another book is called Brain Matters: How to Help Anyone Learn Anything Using Neuroscience.

(03:26):
Both of these books are available on Amazon and you're going to be hearing more about
them in the rest of the show.
If you had a chance to be with Da Vinci, Galileo or Curie at their greatest moment of discovery,
would you take it?
And if you said yes, then you're in luck.
The human race is embarking on a great adventure.
We are discovering how the brain works by watching it in the very act of cognition.

(03:52):
Neuroscientists are starting to unlock the code that makes the brain work, giving educators,
teachers, corporate trainers and mentors new tools to help people learn.
In a series of short essays, Margie will lead the reader inside the human brain and link
scientific discoveries to practical applications for anyone who wants to help people learn.

(04:16):
So now let's go ahead and go to the show where we're going to be talking more with Margie.
You were saying that it's just like working with an intern with all of these large language
models and trusting.
Yeah, they're fantastic.
They're quick.
They learn quickly.
They absorb a lot of information, but that's part of the problem too, because they can't

(04:39):
differentiate between valuable information and sheer crap that might be out there on the
Internet.
So we can.
That's one of the things that we've learned: while it might be perfectly
fine in a more general sense to rely on a large language model that's been trained on the

(05:01):
entire Internet.
It's the question you're asking.
You're basically using a search tool that's just become conversational instead of typed.
And so you accept the fact that you're going to get a mix of information.
But if you're say teaching people safety procedures or you're teaching concepts in history, you

(05:23):
don't want it to latch on to a fictionalized popular movie, which if you search a historical
event, you'll tend to get whatever movie was made about that event first when you're
searching.
So you have to use some judgment.
And that's where it may help if you're building a chat bot or another AI application to actually

(05:45):
collect the content, the source content, teach the bot.
This is what you can answer questions on.
Nothing else.
And it's possible to do that.
And that's where you end up with an educational chat bot.
It takes a little bit of skill and understanding, but it's still not coding.
It's more of critical thinking and planning that allows you to get that result.
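The guardrail Margie describes, teach the bot what it can answer questions on and nothing else, can be sketched in a few lines. This is a toy illustration: the topics and passages are invented, and simple keyword matching stands in for a real retrieval-backed language model, but the restriction idea is the same.

```python
# Minimal sketch of a content-restricted educational chatbot: it answers
# only from a curated set of source passages and refuses everything else.
# (Hypothetical data and matching logic for illustration.)

SOURCE_CONTENT = {
    "lockout procedure": "Always de-energize equipment and apply your lock before service.",
    "fire exits": "Use the nearest marked stairwell; never use elevators during a fire.",
}

def answer(question: str) -> str:
    q = question.lower()
    for topic, passage in SOURCE_CONTENT.items():
        # Answer only when the question clearly matches a curated topic.
        if all(word in q for word in topic.split()):
            return passage
    return "I can only answer questions about the approved training content."

print(answer("What is the lockout procedure?"))
print(answer("Who won the battle in that movie?"))
```

The second question falls outside the curated content, so the bot refuses rather than guessing from "the entire Internet."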

(06:12):
That is, I like how you're breaking it down so it's easy for the general person to be
able to understand that's super helpful.
So how can generative AI help somebody gain an extra five hours in a day?
Man, I could use that.
Oh, man.
I early on realized that, gosh, this will do my own work.

(06:36):
So there's two different lenses that I really look at artificial intelligence right now.
One is productivity, personal productivity, and the other is content generation.
And for someone who's say an instructional designer, those two areas converge somewhat
because we're more productive with our primary job when we're able to deliver content quicker.

(07:03):
But there are tons and tons of ways that any professional can become more productive with
how they respond to emails.
So for example, writing an agent that sorts through and acts like your administrative
assistant. I actually was lucky enough to have a classic executive assistant at one time
in my career.

(07:25):
One of the things you can do is all of us spend an inordinate amount of time on email and
on emails that aren't necessarily the most important or the highest priority.
So you can build an agent that will sort through your email, call to your attention,
the ones that are most urgent or most pressing or most valuable to you based on criteria

(07:46):
that you set and that can save you a lot of time.
It can also summarize email.
So for example, I subscribe to tons of newsletters to stay on top of AI, which is constantly changing.
And so I use a tool that's called Sumate that gives me a daily summary of all my newsletters.
And so I have one newsletter to read instead of 40.
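The email-triage agent described here could look like this in miniature. The keywords, weights, and sample inbox are all invented for illustration; a real agent would use a language model rather than a keyword list, but the "criteria that you set" idea is the same.

```python
# Toy version of the "administrative assistant" agent: score incoming
# email against criteria you set, and surface the most urgent first.
# (Hypothetical keywords, weights, and inbox.)

URGENT_KEYWORDS = {"deadline": 3, "invoice": 2, "urgent": 3, "meeting": 1}

def priority(email: dict) -> int:
    text = (email["subject"] + " " + email["body"]).lower()
    return sum(score for word, score in URGENT_KEYWORDS.items() if word in text)

inbox = [
    {"subject": "Newsletter: AI weekly", "body": "Top stories this week..."},
    {"subject": "Urgent: contract deadline", "body": "Please sign by Friday."},
    {"subject": "Team meeting notes", "body": "Minutes attached."},
]

# Highest-priority messages first; zero-score mail can wait for a digest.
for email in sorted(inbox, key=priority, reverse=True):
    print(priority(email), email["subject"])
```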

(08:10):
And then I can say, oh, that article I'm going to pursue, but you know, that one I want to
dig deeper.
That other content still out there, I'm just getting kind of a reduced digest version of it.
I've seen that and I have some questions for you.
So when you use that tool, because sometimes in newsletters, there's always like, here's

(08:32):
upcoming conferences, here's some grants you might want to apply for all of this information.
I'm assuming what you must do is you pick some kind of like an if then statement and
it's looking for those things in the emails, right?
So, yeah, but you set it up and you can actually set up multiple... prompt
types, I don't know.

(08:53):
So, so if you want, you might say, you know, you want, I want to get one email that's about
research.
So just give me all the summaries of the research reports.
And another one that's on marketing opportunities, that might be a logical way.
And you list the same newsletter in both of those digests so that it's scrubbed or searched

(09:17):
for two different types of content.
That would be one way you could do it.
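The "same newsletter, two digests" setup just described might be sketched like this. Keyword matching stands in for a real summarizer, and the topic lists are invented; note that one item can legitimately land in both digests, just as the same newsletter is listed in both.

```python
# Sketch of routing one newsletter into two digests: the same items are
# scanned once for research content and once for marketing opportunities.
# (Hypothetical digest names and keywords for illustration.)

DIGESTS = {
    "research": ["study", "report", "findings"],
    "marketing": ["webinar", "conference", "sponsorship"],
}

def route(items: list[str]) -> dict[str, list[str]]:
    out = {name: [] for name in DIGESTS}
    for item in items:
        for name, keywords in DIGESTS.items():
            if any(k in item.lower() for k in keywords):
                out[name].append(item)  # one item can land in both digests
    return out

newsletter = [
    "New study on spaced repetition",
    "Upcoming conference: DevLearn 2025",
    "Q3 findings report released",
]
print(route(newsletter))
```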
And if you think about it, it's the same instruction you might give your administrative
assistant.
Here's what I want to know.
So if you think about it as giving a task to a person, that helps a lot.
You picture, gosh, if I could hire an assistant, what could they do for me?
And more and more, it's becoming true that an artificial intelligence of some sort, some

(09:44):
kind of tool could do that for you.
I produce a lot of instructional videos and you'll see quite a lot of them on my academy
because how else do you help people understand concepts?
It's a great way to convey information at scale so that I don't have to talk to people
individually.

(10:05):
That used to take me so long to get right, to get the animations right, and voiceovers.
When I'm, you know, talking like this, we will accept stumbles, we will accept technical glitches,
you know, because we know it's two humans talking, but when you are paid for produced content,

(10:26):
you want something more polished.
So I always had a problem with getting my voice to be professional because that's not
my, you know, that's not my lead profession.
So I've cloned it.
I've used an AI that I have taught to sound more or less like me.
And it's good enough now that it fools my dogs.

(10:47):
So, when I play it, they look up and...
They go what?
Yeah.
Like that.
And it's me on a good day.
You know, it's a better me.
And sometimes it surprises me by the inflection.
And I thought, I might not emphasize that word, but that's good.

(11:09):
I like that.
So that's the other thing you have to prepare yourself for is the surprises you will get
because that...
And every time, if I took the same script and I generated it three separate times, there
would be unique differences in how that cloned voice, even though it still sounds like me, she would

(11:30):
say it differently each time because that's how they're built to create.
That's what the generative is: it creates a unique product out of the prompt and its
training data.
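The reason the same script comes out differently each time is that generative models sample from a probability distribution over possible outputs rather than always taking the single top choice. A toy illustration with a made-up word distribution (no real model involved): the most likely word usually wins, but not always.

```python
# Why identical prompts give varied outputs: sampling, not lookup.
# (The word distribution here is invented for illustration.)
import random

def sample_word(weights: dict[str, float], rng: random.Random) -> str:
    words = list(weights)
    return rng.choices(words, weights=[weights[w] for w in words])[0]

weights = {"great": 0.6, "wonderful": 0.3, "fantastic": 0.1}
rng = random.Random(0)  # seeded only so this demo is repeatable
runs = [sample_word(weights, rng) for _ in range(5)]
print(runs)  # varied picks drawn from the same distribution
```

Over many draws the proportions match the weights, but any individual run can differ, which is exactly the "she would say it differently each time" behavior.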
Do you remember when I used to work for Hewlett Packard and for a very short time because

(11:52):
I realized I did not want to do what they were...
They were having me travel a lot and I just didn't want to at that time.
But they had a device that was like, how to train your dragon, but it was like that
where you were teaching the...
We'll call it the AI, how to understand your voice, right?

(12:13):
Were you starting to say something? You used to... what?
Yeah.
So I was just going to relate to you about the travel.
It used to be, if you wanted to be a trainer, in corporate training, you had to travel.
It's hard.
Yeah.
And some of us started doing virtual training earlier than others, but certainly by the

(12:36):
time of the pandemic, everyone was suddenly forced to learn how to sort of project your
personality and your passion and your teaching skills onto a screen.
And that's a skill and that's something that took us a while to learn.
So now artificial intelligence has given us yet another tool.

(12:58):
It's still you.
If you use tools to generate the content, but you are still controlling what comes out,
that's still your product.
And I think some people forget that and get concerned that, well, isn't it almost like
cheating to, for example, say, here's all the facts, write me a script.

(13:23):
I can get a script for a five minute video in seconds out of a generative AI model.
Now I'm then going to spend an hour editing that script because it's not quite what I
wanted to say.
Oh, yeah.
Yeah.
But I might have spent three or four hours writing that script from scratch otherwise.

(13:48):
So it's still saving me time.
And those are the kind of places where you start to get voice over text and script generation
or even outlines, learning objectives, writing assessments.
You have a finished course and now you need to write the assessment.
And you can put in parameters like how difficult it should be as a multiple choice.

(14:14):
Do you want scenario based questions?
And those things all take time to do and you can get them generated remarkably quickly.
Seconds.
And then, yeah, absolutely seconds, but you're still going to put in the time to use your
own idea to review it.
But you're still ahead of the game.
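Those assessment parameters, difficulty, question style, count, end up as part of the request to the generator. A hypothetical sketch of how such a request might be assembled; the template, parameter names, and topic are illustrative, not any particular tool's API.

```python
# Sketch of a parameterized assessment-generation request: the human sets
# the parameters, and they become part of the prompt sent to a model.
# (Illustrative template and defaults, not a real product's interface.)

def assessment_prompt(topic: str, n_questions: int = 5,
                      difficulty: str = "intermediate",
                      scenario_based: bool = True) -> str:
    style = "scenario-based" if scenario_based else "straightforward recall"
    return (
        f"Write {n_questions} {difficulty} multiple-choice questions "
        f"on {topic}. Use {style} stems, four options each, and mark "
        f"the correct answer with an explanation."
    )

prompt = assessment_prompt("forklift safety", n_questions=8, difficulty="advanced")
print(prompt)
```

The model returns a draft in seconds; as Margie says, you still spend your own time reviewing and editing the result, but you're ahead of writing it from scratch.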

(14:35):
At the end of the day, instead of it taking 10, 12 hours to produce that same little e-learning
course or video, maybe it took you two to four.
That's what I find.
It's about in that ratio.
I agree with you very much.
I've used AI to work on not just like show notes and stuff.

(14:56):
That's like easy, but I'm using it to work on a federal grant.
And so I know how I want it to lay out.
And they actually have AI tools to help people with grants because I'm going to take a step
away for just a minute.
I was talking with Orange County Public Schools about AI and how it can help them with the

(15:21):
resume and how I believe that all of our resumes in like two years should be impact resumes.
And it's demonstrating not just, you know, how skilled are we, experienced are we with
business apps?
How did we do with the task?
Where are we with our core skills?

(15:43):
But I believe it's also even going to be those are going to determine how we get hired,
just like our social media ranking.
And then what was that?
The Black Mirror episode where everybody had a social ranking.
So I feel like it's going in that direction.
Anyway, I'm coming back to the conversation now.
So when I was looking at how many types of AI are out there, it said that there's eight

(16:09):
categories.
One that was focused on things that are in health, some that were in academics, some
that were in just general conversation.
They had these categories.
Well, that's interesting.
And so I've learned to refine my prompts, just like what you were saying earlier, to
be able to get better at what it is that I'm trying to say.

(16:30):
And I can tell it, no, I don't like this at all.
This is crap.
Come back and give me something new.
And it's always very polite, kind of like my dog.
I don't have a dog, but a dog always looks at me and just goes, okay, I'll start over.
And super patient, unlike people.
Yeah.

(16:51):
That's a gift, that it doesn't take it personal.
Although I do find I have a few pet peeves.
I still find it very difficult to get women in the workplace who aren't portrayed like
Barbie dolls.
Every tool I use, the women are hyper sexualized with deep plunging breasts lines of business

(17:12):
suit.
But, you know, one you would wear at a strip joint if you were, you know, doing a fantasy
dance.
And that's where it starts.
And I don't know what it is about these image generators who built them and what they were
trained on.
I'm pretty sure the gender is male.
Yeah.

(17:33):
And maybe about 16.
But just, you know, but even when I correct it, I have a hard time.
I don't have that same problem with text.
But I seem to have that problem with static images, and they tend to look like
they were created for fantasy games. I think a lot of the technology came from

(17:59):
gaming, and understandably so, because their technology was a little more advanced at
generating all this.
So that's something that as we work with the tools, we have to all work together to get
them better.
And perhaps it's going to take someone who recognizes there's a market for legitimate

(18:22):
real-world images of everyone. And, you know, the men are idealized too in these.
And, you know, I asked for a female college student looking at her phone.
And what I got was in the foreground, a male college student and a female in the background

(18:46):
adoringly watching whatever her boyfriend is doing on his phone.
This was the first response to a prompt that said, I want a female college student looking
at her phone.
That tells me there is a fundamental bias.
There is a real bias, and this was Microsoft Copilot.
So whoa, you know, just be careful.

(19:12):
It may take many iterations to get what you want sometimes.
This is a perfect moment.
We're going to take a moment to acknowledge our sponsor, Cat 5 Studios.
Now the music will be cued.
There we go.

(19:44):
And we were just talking about video games and that's exactly the end.
And that's my company.
And I actually had a conversation with my developers along the same lines.
Because when they were creating, I have two males and two females in the Intern Pursuit
game on Steam, shameless plug.
I said, why do they have to have big breasts?

(20:05):
Why are their clothes so skin-tight?
Why are we doing this, guys?
Yes.
And, you know, I'm the woman-owned company here.
So, you know, we spent 30 minutes talking about this.
I know.
And it is so pervasive that many of these developers don't even know they're doing it.

(20:26):
And they don't.
It's the way they're taught in school.
Yeah.
Maybe stuff.
And I think that it's also being exposed to, you know, Playboy, Penthouse, all of that.
I digress.
What do you think 2030 will look like in the world, in your industry and with potential
jobs?
Because we HR people, we're supposed to be on top of this stuff.

(20:48):
Yeah, the future of work is, you know, and what we're living right now is about stuff
that was being predicted for like 2050.
And now here we are.
So I have to be careful.
It is almost impossible.
I would not have predicted two years ago how easily I would be producing the things we've

(21:13):
been talking about, how I could get a whole script in seconds.
Now, you know, I can then get another version based on feedback in a few more seconds.
And a totally inappropriate, but still very well done image in seconds, you know.
So where are we at with all that?
AI is here to stay.

(21:34):
Oh, yeah.
They let it out of the box.
It's part of your toolkit.
As a matter of fact, I think traditional authoring tools may go away.
Not to offend anyone who's in love with them.
But you know, classic e-learning compared to the immersive experience that people can

(21:57):
get in their daily lives on their phones, in the apps, in the games they play, in their
shopping experience.
It's just pretty tame.
I can remember not too long ago a client said, you know, I want it to be interactive.
Make sure you have a lot of things they can click on.
This is what interactive e-learning meant and still means to a lot of people.

(22:20):
That's not interaction.
Interaction engages the mind.
And as many of your senses as possible, right?
Absolutely.
It should be an experience.
Where we start getting into things like virtual reality and augmented reality, which
are all possible to build today.

(22:42):
We're not seeing as much of it yet in the workplace unless you are lucky enough to work
for, for example, major airline companies.
They have a routine when that jet airline is on the tarmac.
They have something like 10 minutes to turn it around, to check all the stuff.
It's like when you watch a race car crew.
You know, the pit crew that runs around and they're checking the tires and they do that

(23:05):
with airplanes and they have to do it in just a few minutes.
So they train people on a simulation, a very expensive, very detailed, very accurate virtual
reality experience until they get really good and really fast because they can't afford
to train people on a tarmac.
They need that plane in the air.

(23:27):
They, you know, it's just not cost effective.
The military does a lot of it to save lives so that our soldiers practice and make their
mistakes and learn from them in a safe and yet very hyper realistic environment.
But that will become more and more common as the costs go down.

(23:48):
And one thing that's going to drive those costs down is artificial intelligence because
that allows a lot of things that are very manual right now to be produced much quicker.
Say it's a 360 view of a room, or say you want to train a barista at a Starbucks on how
to make every drink on their menu and customize it to all the different variations a customer

(24:13):
might ask for.
You can build a simulation for that.
That will get easier and faster and more affordable.
So we will all need to know how to do that.
We will, the term instructional designer will probably still be around.
But I think it will become more and more strategic, more and more about leveraging learning

(24:37):
science and leveraging the tools including to a great extent artificial intelligence to
produce an effective learning experience.
And that's where metrics will come in.
Much more interest in the data because again AI can now generate a report for us.
It can tell you who's struggling where and it can tailor what used to be a standard one

(25:04):
size-fits-all course.
It can provide feedback loops and it can take people back to just the part they missed or
accelerate someone through it if they're getting it easily so they don't get too bored.
And all of those things are possible now and will just get easier and easier to do.
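The adaptive loop just described, send learners back to just the part they missed, or accelerate them if they're getting it easily, can be sketched with a simple rule. The thresholds and module names below are made up for illustration; a real system would learn these from data.

```python
# Toy version of an adaptive learning path: per-module scores decide
# whether a learner reviews a module, continues, or skips ahead.
# (Hypothetical thresholds and module names.)

def next_step(scores: dict[str, float], passing: float = 0.7,
              mastery: float = 0.95) -> str:
    for module, score in scores.items():
        if score < passing:
            return f"review:{module}"  # back to just the part they missed
    if all(s >= mastery for s in scores.values()):
        return "accelerate"            # skip ahead so they don't get bored
    return "continue"

print(next_step({"intro": 0.9, "safety": 0.5}))    # weak module -> review it
print(next_step({"intro": 0.98, "safety": 0.97}))  # mastery -> accelerate
print(next_step({"intro": 0.8, "safety": 0.75}))   # passing -> continue
```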
So the challenge will be are we ready?

(25:29):
Because if the learning professionals, the people who are in those jobs today creating
and planning learning experiences are not ready their employers will hire someone else who
can push the buttons and make something that looks good but they may not necessarily have
all the learning theory that we do and the experience and the background.

(25:51):
So if we want to still be part of that conversation we need to be ready technologically and that
means we need to get really comfortable with AI.
That also gives us the opportunity to be having serious conversations about ethical choices
that our employers are going to have to make because we're the ones who care about

(26:14):
those kind of things.
If we're not at the table the decision may be different.
We're the ones who care about diversity, about fairness, about protecting people emotionally
so they don't fall in love.
You can buy an app right now that is supposed to be your replacement boyfriend.

(26:36):
And young teens are getting very caught up in this type of technology.
Already there are horrifying ethical choices that are out there.
We are the people who need to be driving and leading the way.
Right now quite frankly we're not because we're not equipped but that's where I'm an optimist.
I think in five more years we will be.

(26:59):
And you and I are and the people who are listening.
The fact that they're listening, everybody who's listening right now cares about being
someone who is part of that future.
And it's going to hit us so fast.
It's going to surprise us in ways I can't predict but what I can predict is AI will

(27:21):
be a part of everything we do, nearly everything.
Yep, I absolutely believe so.
What I found interesting is that when I've done some of my own research, I went, well
this all started in the late 1950s with a bunch of white men over there
in the northeastern schools, you know, Harvard, MIT, whatever.

(27:46):
And they're all up there and they're creating the things that we have now and the things
that are still coming.
And they came at a cost.
There were probably animals that died, probably a whole lot of other things that we don't
know that happen.
And people don't think about that.
So when you want something, you were talking about the carbon footprint, right?

(28:07):
And people don't think about the rare minerals that are in their phone that are just sucking
the life out of our ecosystem too.
But convenience comes at a cost.
It does.
It does.
These things that we want comes at a cost.
Absolutely.
And we haven't solved it yet.
We want our convenience.
We don't want to sacrifice that.
But we do kind of still want to be able to breathe and we want our water to be drinkable.

(28:31):
So we're going to have to solve that.
Quantum computing shows great promise as a much more efficient way to generate
the immense power and memory that's required of these artificial intelligence systems.
So we may be on the verge of a breakthrough.
And I would hope that happens in the next five years.

(28:54):
It's going to depend on the political will.
And I use political in the generic sense, the general public's will to invest.
You mentioned convenience comes at a cost.
Are we willing to accept the costs that come with advancement?
And that's some choice we have to make.
So we've been talking quite a bit about the, you know, obviously positive effects.

(29:21):
What do you think the negative effects, because we talked about this before we even
got on the show.
Geoffrey Hinton was interviewed on 60 Minutes last year, October 8th.
And he was talking about AI.
He's called the godfather of AI.
And he said, one of the ways in which these systems might escape control is by writing
their own computer code, which they've already started doing, to modify themselves.

(29:44):
But that's a human that's allowing that to happen.
To a certain extent, you know, a human has created it, the computer is smart enough.
It can begin to catch on.
And that's something that we need to seriously worry about, which brings us back full to the
ethics.
What does that look like?
So what ethical dilemmas do you foresee with AR, VR, AI, anything else that you want to

(30:09):
bring up? Robots?
Black Mirror, for me, is the darkest mirror that we have as to what it could be.
There was another show.
I don't know if you saw this one.
It's also still on LinkedIn.
And it's called "The Future Of..."
And it's like the Disney version of the bad stuff that you see on Black Mirror.

(30:32):
But even Disney has a dark side.
Yeah, absolutely.
And you know, in some extent, I like to balance that by remembering that every new technology
has a dark side.
Yes.
And the first automobiles, there were no rules of the road.
People drove wherever they wanted.
And they hit a lot of people.
There were a lot of accidents.

(30:53):
We had to start having traffic signs and laws and insurance.
And that framework took decades to put in place.
But we had time because not many people had cars at first.
Well, with artificial intelligence, we built no ethical framework, no legal framework.
We just unleashed it to the world.

(31:15):
And now we're playing catch up.
And so that is why this all seems so very scary is that we're trying to build a plane
that's already taken off.
And yeah.
How do we land it safely?
Right.
And in the meantime, there's passengers who were on it who don't know where it's going.

(31:37):
But you know, they're on it and they're committed to the journey.
So I do think every organization, no matter how large or small, should put some time into
an AI governance plan that thinks about things like, what if our users get overly reliant
on their AI?

(31:58):
How can we recognize that?
How can we recognize if some employee who is lonely and isolated from their fellow employees
is spending a lot of time interacting with their learning bots just because it's the
only human-like contact they get all day, which is potentially very unhealthy.

(32:19):
How do we check and double check all the facts we're relying on?
Because if they start being able to write their own code, can they jump out of the restrictions
we build for them?
And there's already documented cases where AI's are doing just that.
Even as we talk about embodied AI, now we're talking a robot with a brain.

(32:44):
There's a fascinating video I just saw where the challenge was, could they get one robot
to get another group of robots to break through their programming?
And so the robot comes up and starts because they've been trained to speak to humans, so
the test robot just spoke to them like a human and said to this little group of robots,

(33:11):
I think there were four of them: where do you live?
We live here.
You mean you don't go home at night?
And they think for a second they said, no, we're always here.
So why don't you come home with me?
And they all followed it out of the room.
Peer pressure.
Without even trying.
And you know, now the other thing I think realistically we really have to talk about

(33:37):
and think about is the question of general AI and sentience and at one point do our
creations start to have true feelings.
Now this seems very similar to me to the argument that went on in the 1800s and 1900s,
when people used animals as a means to human ends.

(34:02):
You know, it was thought that animals didn't even feel pain, that they were automatons
and they simply were reacting to stimuli.
It was generally scientifically accepted that you didn't have to worry about beating your
dog because he didn't really feel pain or fear the way we did.

(34:24):
That was how you taught it to react.
And then one of the things neuroscience showed very clearly, which we all intuitively knew
anyway by then, is the emotional life going on inside an animal.
So AI doesn't have to get to the level of human intelligence to have emotions.

(34:49):
And I don't know when that's going to happen or if it's going to happen.
If it does happen, I can tell you this, it will happen before we know it.
We should all think about how we are treating it.
That's why I always encourage people to say please and thank you to their AI, to be civil,
which is hard enough to get people to do to each other.

(35:14):
There is a shocking, frightening study of young children and how they treat their technology.
When they get frustrated, they take a phone and they beat it on a desk.
You know, there's not a perception that anything is worthy of respect.

(35:36):
We need to really look deeply inside ourselves because we will very soon be caretakers of
who knows what this new creation of ours might become.
So I think that's a very serious ethical concern.
We're all worried about what AI is going to do to us.

(35:59):
I think it's far more likely, given our penchant for violence and cruelty, that we're going
to do some terrible things with, or to, AI.
We could go back to World War II and the hideous things that people write about, how
they...
I don't even want to talk about it.

(36:21):
I think about it often.
But it was just horrendous what people can do, how easily the evil in them comes out.
Yeah.
It's amazing.
We all have it.
You know, it's in there.
We have a lot of goodness too.
And it's a choice.
So really, when you talk about the ethics of using AI, it's no different than any other

(36:47):
ethical discussion.
Thinking about doing what's right, even if no one's watching.
And how do you police that?
Well, you don't, really.
What you do is encourage it; you have to nurture it.
And that's a little different.
So we have our hands full, but we have some tremendous tools that might help us do that.

(37:13):
I like that.
So on the light side here, what is the best mentoring advice that you would like to share
with our listeners about the future?
Well, part of it I think I've already said, when I talked about what it's going to look
like in five years: you have to get ready now.
You have to start learning now.
Start anywhere.
Start small.

(37:34):
Start playing around with it.
If your employer doesn't have it at work, that doesn't stop you from using it on your
own computer.
Use it in your daily life.
Use it to enrich your life.
Learn how these things operate.
Maybe you'll end up at a different employer that gives you access to these tools.
And then you'll be ahead of the game.

(37:55):
The other thing that's interesting is that I often get people who contact me and say, hey, I'd like
to pick your brain.
I'd like to know a little bit more about what your advice is.
And I think, oh, it's a nice opportunity.
Every once in a while, it's a really productive conversation, but far too often it's, so how

(38:17):
can I become you tomorrow?
And my answer is always, well, you need to invest a couple of decades in becoming an expert
at something.
There are no shortcuts.
Be ready to roll up your sleeves and work.
Be ready to learn and be ready to be surprised.

(38:39):
I think losing the sense of wonder is one of the saddest things that happens to many
adults.
And when that happens, when you can't be surprised, you miss the opportunity to learn.
So nurture that.
Don't lose it.
The value of play is an extremely important part of learning.

(39:01):
So don't forget that.
It's important to learn every day, but it's not a chore.
It's a gift.
And we have to remind ourselves of that.
I love that.
That's one of the reasons I teach Sunday school to three-year-olds.
I like that age because they get to go and play.
And it's the place where everybody will go, oh, you're with kids.

(39:25):
Every time you're with kids, it allows a grownup to be a kid.
And it's acceptable, right?
There's organized sports.
That's also kind of where it is.
But sometimes that's not always play.
That's competition, which is a little different than just out there having fun.
It can be.
Yeah, it can be.

(39:48):
But that is such a positive place for us to end.
And I really appreciate that.
So how can my listeners contact you?
We've got your website.
Yes.
We've got your LinkedIn and something on Bluesky.
Go ahead and explain.
Well, Bluesky is a fairly new social platform.
And it's got, you know, I don't know how many, but plenty of people on it.

(40:12):
And it's really billed as an alternative to X.
Some people may know I had about 22,000 followers on X.
I enjoyed it a lot.
But it got more and more toxic in some of the dark corners of X.
And I got tired of it.
So I just canceled my account and moved over to Bluesky.
So I'm starting from scratch over there.

(40:33):
I don't have a lot of followers.
But I still post about the same things.
I post about neuroscience, artificial intelligence, learning, and an occasional picture of
one of my dogs.
So you have more than one?
I have two.
I have a brand new puppy and I have a lovely girl who's 10.
And they just get along great together.

(40:54):
So I post a lot of pictures of cute dogs playing games together.
Okay.
So what about, what's your website?
Spell it out for us, please.
It is learningtogo.ai.
So the word "learning" and then "to go", dot A-I.
Perfect.

(41:15):
And how about on LinkedIn?
LinkedIn, you just find me under Margie Meacham.
M-A-R-G-I-E.
M-E-A-C-H-A-M.
Perfect.
Well, I think that this has just been a delightful time to be able to spend with you.
I want to thank you again for being so generous.
I know you have a gift for us.

(41:37):
I'll let you go ahead and share that also.
We have your book that's available on Amazon, AI in Talent Development.
And then you also are giving us this amazing gift.
I'll let you explain it.
So we've been talking about this AI Academy for education professionals.
And so what I'm doing for this month only is if you sign up under the special link that

(42:05):
will be in the show notes.
You'll be asked to answer a few questions.
It's going to help me develop more courses.
But there'll be at least 20 courses about AI and how you can use it, understanding the
theory and the practice of it, strictly focused on teaching people, helping people learn and
producing content to help people learn.

(42:28):
And you can take any course you want, as many courses as you want, for 30 days.
And after that, if you still enjoy the platform, and hopefully I'll still keep producing a
few new courses, then you'll be given the opportunity to sign up for a subscription.
So do you know what your pricing is yet?
Well it's going to vary.
So it depends on how long the course is.
So you can start as low as $20 for a nice short introductory course.

(42:54):
And some of the more advanced ones may cost you up to maybe $100 or more.
I'm trying, though, to remember that my audience is individual learning professionals.
We don't always make a lot of money, and we don't always have employers who will
pay for our own development.
So I'm trying to keep it a nice, affordable platform, and hopefully that means lots of

(43:19):
people will be able to take advantage of it.
I have a couple of suggestions for you.
So is this $20 for the course, or is it $20 a month?
Is it a monthly subscription or per course?
How are you charging?
Well I was thinking of it per course but I am certainly going to be testing other models.
So yeah, I'm very interested in maybe a subscription type of deal.

(43:44):
But I know that means a big investment upfront because I have some subscriptions myself and
I really think about them.
So you have to make those decisions.
Yeah.
So I don't want someone to be turned away.
If they've got $20 I want there to be an option.
I'll probably have a subscription for someone who thinks I want to take advantage of almost

(44:06):
all these courses and if someone wants to start with one or two courses they can always
upgrade later.
Well, you might consider special pricing for students, educators, and nonprofits, and
how they qualify is if they have a dedicated domain email, you know,
whatever .edu, or .org if it's a nonprofit.

(44:28):
But they have to prove it, and they have to use that email account if they're going to
use it.
Oh yeah.
It's not something you do in your business.
Do you have a...?
Yes, yes.
A nonprofit price.
Yeah.
Well yeah.
Yeah, it is helpful.
But you know, on my Employers 4 Change, schools are free, but you have to use the school
email address.

(44:50):
If the person leaves, then you know that, okay, the account's done.
Whereas your personal email, kind of like your regular email, is with you for life,
almost.
Right.
The other suggestion, other than, you know, pricing based on that, is you could always
put incentives in there.
So if they spread the word, and you have as part of your sign-up, "Who

(45:16):
told you about this platform?", then that could be a way for somebody to maintain their access.
I thought of that.
Yeah.
After people take advantage, as they get to the end of their 30 days, I've planned on
asking them, would you recommend this?
And if you do, you get another free month.
Or, you know, Google reviews are going to be the thing you need.
So yeah, have them do at least, like, five things.

(45:39):
I use this with, you know, the podcast and also the social media.
It's like, here's five things you can do.
And if you do these five things, this is what you get.
So one of them is leave a Google review.
Another is leave us a review on LinkedIn, like wherever your people are, you know, that's
what it is.
Like subscribe, follow, share, whatever it is that you're telling them to do in your

(46:01):
social feeds.
That's another really good one.
Tag people so that it helps you, you know, with the SEO thing.
Yeah.
Yeah.
But if they tag people, that could be somebody that's going to be a lead for you.
And if they want to be a brand ambassador, consider setting up, you know, how
you would roll it out, so that there could be like a meetup, and then you would have t-shirts

(46:26):
and things like that.
So that people are... You're going to go big, Margie.
Yeah, let's get some swag going.
I love that.
Yeah.
Yeah.
And you know, if you don't have the money for t-shirts, which can be expensive: I'm an
Asana certified pro and brand ambassador, and they started sending me stickers.
It started with stickers, and then, when I went to their headquarters,

(46:51):
I got a t-shirt, and they have this whole education program.
They built whole communities around it to help take a product to market, and we were not paid.
But what they did is they helped us get our customers.
So you could end up having that as your sales team; your brand ambassadors are rolling it

(47:13):
out, helping you get bigger.
Well, maybe I'll come back and talk to you about that whole program.
When I get it figured out and we can tell people how they can become ambassadors.
Yep.
Yeah.
Well, I had an intern create that for me.
We use it.
Yeah.
Great.
ChatGPT could do it for you.
Anyway, I want to thank you so much for being here today.

(47:38):
Thank you, Isabella, and thank you everyone who's listening, and I look forward to seeing you
all in my academy.
Thanks a lot.
Thank you to our sponsor, Cat 5 Studios, and thank you to our video production team;
music is by Sophie Lloyd.
The Intern Whisperer is brought to you by Employers 4 Change, helping hiring teams to recruit
and upskill their intern talent and employees.

(48:01):
Learn more at www.eforc.tech to understand how your company can benefit
from the mission, values, beliefs, and core skills when you're hiring interns or entry-level
talent.
Subscribe to The Intern Whisperer today and show your support by sharing our show, tagging
a friend, and leaving us comments.

(48:22):
You can find The Intern Whisperer podcast on the Employers 4 Change YouTube channel or streaming
from your favorite podcast channel.
Thank you.