
June 18, 2025 48 mins


AI isn't replacing leaders.
But it is revealing what kind of leader you really are.

In this powerful conversation, we sit down with Jacqueline Carter, author of "More Human," researcher, and senior partner at Potential Project, to unpack what it actually means to be a more human leader in the age of AI.

We dive into:
✅ The 3 skills every modern leader needs (and why only 16% are ready)
✅ How AI can amplify—not replace—your awareness, wisdom, and compassion
✅ The surprising ways tech is quietly eroding trust, connection, and decision-making
✅ What leaders can actually do to stay in the driver’s seat
✅ And why people still crave messy, imperfect humans over perfect algorithms

Whether you're in HR or leading a function, this episode is your practical guide to leading with humanity and harnessing technology—before the culture gets away from you.

🔗 Bonus: Jacqueline drops a chilling story about a deepfake scam that almost cost one exec millions. Yes, really.

Tune in if you want to build future-ready leadership that your team actually trusts.

About Jacqueline: 

Jacqueline is a Senior Partner and Director of North America at Potential Project. She is a leadership development and corporate culture expert who helps global companies create a more human world of work.

Jacqueline is co-author, most recently, of More Human: How the Power of AI Can Transform the Way You Lead (Harvard Business Review Press, 2025) and Compassionate Leadership: How to Do Hard Things in a Human Way (Harvard Business Review, 2022). She also co-authored The Mind of the Leader: How to Lead Yourself, Your People, and Your Organization for Extraordinary Results (Harvard Business Review, 2018) and One Second Ahead: Enhance Your Performance at Work with Mindfulness (Palgrave Macmillan, 2015). Jacqueline is known as an inspiring, dynamic and energizing speaker with a focus on highly engaging, creative, and impactful experiences. In addition, she writes for leading publications such as Harvard Business Review, Forbes, Americ

Disclaimer: This podcast is for informational purposes only and should not be considered professional advice. We are not responsible for any losses, damages, or liabilities that may arise from the use of this podcast. The views expressed in this podcast may not be those of the host or the management.

Thanks for listening!

Hey! We love new friends! Connect with us!


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
It's like a lot of organizations right now.

(00:01):
It's like they're rolling out high-end Ferraris but not teaching anybody how to drive, and that's just a waste of money.

Speaker 2 (00:21):
Welcome to your Work Friends.
I'm Francesca and I'm Mel.
We are breaking down work so you get ahead. Mel, what's the good word?

Speaker 3 (00:31):
I'm heading to Rhode Island and excited for that.
How about you?
Very nice. Pretty chill over here. I showed Robbie the picture of Enzo on the first day of first grade and the last day of first grade.

Speaker 2 (00:46):
For those of you who don't know, I have a seven-year-old. He started first grade with a shaved head. He ended first grade with this mop of curls, a gold chain. Dressed in all black, he either looks like he's a sophomore in college at USC, or is going to start really getting into Creed sometime soon. I don't know. It's all over the map.

Speaker 3 (01:03):
I love his style evolution.

Speaker 2 (01:05):
We let him dress the way he wants to. He picks out all of that. He asks for a chain. We got it from TJ Maxx or something. It's not like he has a real gold chain or anything like that, but it's interesting watching your kid make choices.

Speaker 3 (01:18):
Yeah, it's just fun to watch their personalities evolve. I don't know, I think it's really cool, but I do too.

Speaker 2 (01:24):
I will allow him to wear anything but Skechers. My child will not be wearing Skechers.
What's your beef with Skechers?
I just, I cannot. I, like, cannot stand that brand. I don't know what it is. It's like a joke. I don't appreciate it.

(01:46):
I don't. It's not real; I need it to be. Oh my God, every once in a while, especially when he was younger, he'd pick some up and some of them would light up underneath, and I'm like, absolutely not. I will let my kid wear the craziest shit, except for Skechers. Absolutely not.

Speaker 3 (02:03):
No, I love it. Speaking of style, we launched some merch on our website. We are independent and we want to keep it that way, but if you feel so inclined, check out some of the merch that we put up there. We think there are some pretty cool designs over there to check out. Any purchase that you make helps us stay operational, so we appreciate your support, and you get a cool hat or sweatshirt

(02:27):
or something.

Speaker 2 (02:28):
Good hats for summer, sunscreen. That all works out. It's all good stuff.

Speaker 3 (02:31):
We had such a great conversation earlier this week with Jacqueline Carter. She's an author, speaker, a senior partner and a director for North America at Potential Project. She's an expert in leadership development, mindsets and corporate culture, and she just came out with this book, More

(02:51):
Human.
This is an opportunity, right, for us to lean into our humanity in the workplace and really see AI as a partner, but also taking some precaution as we go through this evolution. What did you take away from this conversation?

Speaker 2 (03:07):
We've been talking about the future of work and AI, and even things like, oh, we have to lean into our deeply human skills, for the last 10 years. This is not something that's new. What I think is so different about what Jacqueline, More Human the book, and Potential Project are talking about is they're making it really easy to lean into those

(03:28):
more human skills that you really need to. In this conversation, and also in the book, Jacqueline outlines this trifecta of how to make yourself a more human leader as technology takes on more and more of work. How do you lean into that humanity? That trifecta being awareness, wisdom and compassion. And I love that, because you

(03:51):
and I have both seen these deeply human skills as a laundry list of 30, 40, 50. And you're just like, Jesus Christ, how many do I have? These are the top three. If you're going to do any, do these.

Speaker 3 (04:00):
Yeah, I really liked that. I also liked the concept of moving away from either-or. You could either have AI or humans; you can't have both. But this is a both and conversation. There's just a lot of power in that. Everybody's going through this shift. I don't think I get any news alerts that don't involve AI in the title these days, but if you and your team are moving

(04:22):
through this, this is a definite book to check out. With that, we think you should check this conversation out. So here's Jacqueline.

Speaker 2 (04:50):
So I think every day, maybe multiple times a day, I'm asking somebody, sometimes jokingly, what timeline are we living in? Because it just feels like some fascinating times for various reasons, but really amazing times for opportunity. And I'm looking at this moment in time around AI, this moment in time around humanity.

(05:11):
What made you see this as a moment in time, as like a fork in the road, especially for leadership?

Speaker 1 (05:17):
So, as you guys know, I'm part of an organization, Potential Project. We are a research and leadership development organization. We've been focusing on researching and supporting leaders in global companies to be able to enhance their potential for the past 15 years, and what we're really excited about is we really do see this as an amazing moment in time in

(05:38):
terms of human leadership. And when we look at it, we really see that with artificial intelligence and, specifically, of course, generative AI, we have the opportunity to really make this major shift from management, which none of us really liked, to be able to lift into leadership, which is really about elevating our ability to be able to enable

(06:00):
other people to realize more of their potential. And if, and I think that's the big if, we're able to navigate this, we really see a potential opportunity for a future of work that is really quite inspiring and, I think, one that could be really quite hopeful and flush with possibilities. At the same time, we also see a lot of darkness, and I think that's really why it's never been more important to be a

(06:22):
leader, and it's never been more important, as a leader, to be able to lean into the kind of choices that we need to make about the future of leadership.

Speaker 2 (06:32):
One of the things I'm curious about is, you mentioned these more human kinds of aspects of things, like we have this opportunity to really reach this different level of leadership potential. How do you define more human in an AI-powered world? What does that look like?

Speaker 1 (06:47):
We come at it very much from a research and data perspective. So what we've done, like I said, over the past 15 years, is really distill what we see as being three core qualities of leadership excellence in terms of being able to bring out the best of our human potential. And they're going to sound simple, but simple is not easy. And they're going to sound like common sense, but common sense is not always common practice.

(07:07):
So, fundamentally, the first core quality is awareness: being able to be aware of what's going on internally and also aware of what's going on around us. The second core quality is wisdom, and this is very different than knowledge; wisdom is basically the discerning capacity of mind, to be able to discern what's happening so that I can make wise choices. And the third core quality is compassion, and compassion, the

(07:30):
way we define it, is to be able to do the really hard things that we need to do as leaders, but to be able to do them in a human way. So when I operate with compassion, say, I'm able to give really tough feedback, but do it in a way that helps you to be able to hear it, so that it supports you in your development journey, as opposed to you feeling degraded and depressed. And what we see from a data perspective is that only 16% of

(07:54):
leaders are really ready to be able to lean into these core skills that we believe are critical in the age of AI. 60% have potential, but 24% probably really shouldn't be leaders. I think we've all seen leaders that have been promoted because they had great technical skills, but they really don't have these human skills that we know are critical for the future, to not

(08:17):
only be able to leverage the benefits of AI and overcome the risks. But we know there's going to be massive transformation in the world of work in the coming years, and we need leaders who can really lean into those human elements to be able to guide the workforce and create the work context of the future.

Speaker 2 (08:33):
First of all, those data points resonate. I think all of us that have worked, especially in corporate, you're like, yeah, that tracks, that tracks. Maybe you haven't had a leader that leads with awareness, wisdom and compassion, or maybe you had one and you remember them for the rest of your life. Right, they just make or break your work experience. We're not seeing a lot of organizations invest in what we

(08:57):
will call these deeply human skills. Yet, to your very good point, only a small percentage of people are ready. Most people aren't getting trained in those. I'm curious about that moment in time where the mindset shifts, where people start to pull in that direction. What does that look like? What's that mindset shift that separates someone that has that

(09:17):
AI-augmented leader trifecta of awareness, wisdom and compassion? What is that shift that people are making?

Speaker 1 (09:24):
Yeah, I do love that you point out that not enough organizations are investing in it, and that's one of the things that we feel very privileged about: we, of course, work with many global companies that are actually prioritizing the human development aspect in parallel with the AI advancements. And I think when we see organizations that say, yes, we've got to be able to roll out the technology, but at the same time, we want

(09:46):
to make sure that we emphasize the human, the way that we see that, it's like a lot of organizations right now. It's like they're rolling out high-end Ferraris but not teaching anybody how to drive, and that's just dumb, that's just a waste of money. And so where the light bulb really goes on for leaders is when you know these aspects of awareness, wisdom and compassion, and that's why I said they're common sense, because, when

(10:10):
you dive into them, we all innately have these capabilities. It's whether we have the permission to be able to develop them, the permission to be able to see the benefit. And we again look at it very much from a research and data perspective: leaders that have high awareness, wisdom and compassion are able to create the conditions where people feel more empowered, where there's greater trust, greater performance, greater job engagement. When you create the conditions where

(10:32):
people know that there's a prize at the end and, at the same time, you give them a path to be able to support them in that development: oh, I can develop compassion. Yeah, I may have a set point where I'm good or not good, but there's a journey, and I can see how to be able to take that journey and to be able to be supported along the way.

Speaker 2 (10:51):
Yeah, yeah. I really love, too, by the way, that it's three. Having been in talent development my entire career, we've been seeing these deeply human skills that you're going to need. It's a laundry list. It's typically a laundry list of at least 10. And to be able to have this distilled down into a framework of, look, these are the three that will reap the most benefits, that

(11:14):
you really need to be focusing on: huge. One question that might be a little controversial. I'm going to just ask it. Here we go. All these organizations are investing heavily in tech but not in the human aspect of it. Buying a Ferrari and not having someone know how to drive. Would you say people need to invest in the human first, or are they better

(11:35):
suited to invest in teaching people how to drive first before they buy the car?

Speaker 1 (11:39):
I love the question, because one of the other things that we found in our research, and it was a little bit surprising to us, was what happens if you just, and we call them human purists, so let's say, if you just invest in the human. Which we were like, yay, this would be so great. Imagine an organization that just invested in the human; we were like, this would be awesome. But the reality is that the smartest human being is a little

(12:02):
bit smarter when they leverage AI, so it really does have to be a both and. That was, I think, one of the key insights that we came to: this paradox. So the journey, we believe, needs to start with the human, because great tools in unskilled hands are not great tools. Like, you can do a lot of damage with a hammer, or you can build a house with a hammer. So you need to be able

(12:24):
to have the right skills. So you need to invest in the human development. But the opportunity now is to augment these great human capacities that we have. We can augment them with AI. My awareness enhances when I leverage AI. My wisdom enhances when I leverage AI. My compassion even enhances when I leverage AI. So it's really a both and, from our perspective.

Speaker 3 (13:10):
I'd love to drill down further into what each of these looks like in practice, alongside AI, because in the book there were some really good case studies, if that's okay. Yeah. Awareness: when you think about that skill set, that human capability, what does that look like daily, alongside the use of AI tools?

Speaker 1 (13:28):
Absolutely. Definitions are always important, because awareness can sound like a really big word, but the way that we look at awareness is the ability to, like I said, be aware of what's going on internally and, at the same time, be aware, to the extent that I can, of what's going on externally. And we know, let's just take a simple example: situational awareness. Right, it's been, for many leaders, for a long time.

(13:49):
We've been told that not everybody is the same, and so we need to be able to be situational. And Mel, what you like is different, Francesca, than what you like. But that's really hard. Like, how can I, as a mere human, be able to really keep track of, oh, when I communicate to Mel, it's a little bit different than when I communicate to other members of my team? And these are the kinds of things that AI is really good at.

(14:10):
AI is really good at being able to track what's important to you, Mel, what kind of messages resonate with you as opposed to what would be useful for other employees. And so, again, that's why that awareness of being able to be more aware of what's happening with my employees, what's important to them, can really then enhance me. My awareness is lifted.

(14:31):
And these are just some examples. But we also see, like, sentiment analysis. I sent out an email to my organization. AI tools can let me know: was it opened, did anybody read it, how long did they read it for, when did they respond? If they responded, what was the sentiment associated with that response? And this is gold, because this can then enable me to be able to

(14:52):
enhance my ability to communicate more effectively. And I think these are just some examples, but that's again from an external perspective. But then, from an internal perspective, AI can help to be able to challenge me on maybe my biases, on my limitations, on my blind spots, to be able to support me in a development journey if I'm open to that. So these are just again some of the ways, but we just see it's

(15:14):
a really amazing tool to be able to support enhancing my awareness.

Speaker 3 (15:19):
Awareness and wisdom and compassion are all critically important, right, for the future, for this to be successful. But it stood out to me, it felt like compassion might be more of the linchpin here, because I think you had mentioned it's the one thing that AI can't replicate. What crystallized that for you from your research?

Speaker 1 (15:39):
Yeah, and I would say, the way that we see awareness, wisdom, compassion, they're all linked, right. It's how, neurologically, the theory of mind, how the mind works: we perceive, then we discern, and then we respond. And so very much that responsive capacity of mind, in an ideal world, for leaders that want to be effective, is compassion, right, to do those hard things and do it in a human way.

(16:01):
And what's super interesting is that we really have been following the advances of AI with great interest. We know that right now, people actually prefer, and sometimes feel like, an AI chatbot is more empathetic than a busy leader, right, which is not surprising, because an AI chatbot has all

(16:21):
the time in the world to say, oh Mel, I'm so sorry that you're having that problem, how can I help? It isn't rushed to be able to get to the next meeting. But the key thing, and I think the reason why, Mel, to your question, it's the most important, is because, even as AI gets better and better at being able to mimic human emotions, it's programmed, of course, with all the intelligence that we

(16:43):
know around emotional intelligence, around human psychology, human behavior. Fundamentally, human beings are social beings. We feel each other, we thrive based on each other. And fundamentally, and this was so interesting, what the research shows is, even though people found that the AI chatbot

(17:03):
was more engaging, they felt empty inside. They felt fooled when they found out that it was an AI chatbot, because, fundamentally, human beings prefer messy but authentic other human beings to perfect, programmatic, empathetic AI.

Speaker 3 (17:17):
Yeah, of course. In the news, just like when video games were villainized, right, you think of some of the horror stories that are also coming out as well, because AI is essentially acting as a mirror of the person who's using it. It's interesting. I'm curious about these three pillars, because you mentioned, what was it, 16% have these skills and 60% can use some training on it.

(17:40):
That's a pretty big gap, and then 24% who are never going to get there. Of these three pillars, which do you feel people struggle with the most?

Speaker 1 (17:49):
Yeah, I love that question, and maybe I'll just say this, and I hope it's okay, but what we did find is, within that 16%: one in four women, only one in 10 men. That's a whole nother podcast. I just wanted to say, yeah, exactly, very interesting data. It's really interesting around that 60%. A couple of things surprised us. Sometimes a non-result is as interesting as a result.

(18:12):
One of the things that surprised us is that there wasn't more differential around level, so we expected to see a real difference in seniority and, specifically, around wisdom. We just made the assumption that people that were, and, by the way, I should say, that data that I shared, that's not based on leaders rating themselves; that is employees rating their leaders.

(18:32):
So this is in the eyes of the employees. So, based on 360 data: do I see you, as a leader, as being able to demonstrate these qualities of awareness, wisdom and compassion? Which is quite different than when leaders rate themselves. So what's interesting is, I would say, though, that still, our experience working with leaders is that most more senior leaders have figured out how to

(18:58):
manage their mind, how to manage their time, which is a lot around the importance of awareness. Do I know what to focus on? Am I able to read the tea leaves, to be able to say, this is most important and I can let this go? Most of them have a good North Star, which is really around that wisdom capacity, and that's why it is the one that we emphasize. I do think especially more senior leaders struggle with the compassion piece. And oftentimes what we do see, and

(19:21):
what's really interesting, is that, as leaders rise in ranks, their ability to engage in a compassionate way, in the eyes of their employees, goes lower, and that we find really interesting. And it makes sense, because, of course, as you rise in ranks, you're making bigger decisions that affect more people, you have a bigger span of control and you don't necessarily have

(19:41):
those same touch points, and so it makes sense that a leader might be seen as being less compassionate. But the key thing is that we also see leaders that use that as an excuse, and what we see is there's a real opportunity, because we know, just because you have a big span of control, that doesn't mean that you can't show up with compassion. That compassion piece is probably the one that all

(20:03):
leaders can develop. And I would say, at lower levels of leadership, we do see a lot of that awareness. Right, it is that starting point, because you can't really dive into wisdom and compassion if you don't have good awareness about what might be getting in your way, if you can't manage your mind and manage your time. We were surprised that there wasn't more differential by seniority, but that is our experience working with many

(20:25):
leaders over the past decade.

Speaker 3 (20:27):
It's really interesting to see that, but I could also see why it's probably the lowest with compassion, just given how things change when you grow. If your organization isn't going to give you opportunities to nurture these things while we're going

(20:48):
through this massive technological shift at work, what are simple ways people can start to nurture these three qualities in themselves today?

Speaker 1 (20:50):
Yeah, yeah, I love that question, and I've always been inspired by the quote, be the change you want to see in the world. If you want to be a good human being, which is really what these skills are around: be present, be wise, be caring. And those are, I think, key things in terms of your own brand. And, of course, there's a lot of resources. Our book is one resource to be able to provide

(21:11):
some practical tools around it. Probably the starting point is really around the intentionality, and oftentimes, when we work with leaders, we look to be able to have simple brain hacks to be able to help you in this moment. Like, what's your intention right now? My intention is to be of service. That's my intention.

(21:31):
If you ask me a question, I'm going to try to be of best service, and just those simple things can really help us as leaders. When I show up for my team, it's like, I just want to be present and I want to be able to support everybody in the best way possible. I'm just going to be a good listener. Whatever it is, the starting point for all of us is really around setting our intentions and using that as our North Star, because we know being a leader today is really hard.

(21:54):
Let's be real, yeah.

Speaker 3 (21:56):
I think if all you can do is set the intention and always go in with that, that's always a good starting point.

Speaker 1 (22:01):
Then afterwards, have opportunities for reflection. Say, how did I do? And then you get that learning cycle. So, these were my intentions, this is how I wanted to show up in this meeting, and then to give myself the space and the grace to be able to say, okay, how did I do, what did I learn? What do I want to implement for tomorrow?

Speaker 2 (22:19):
I want to talk a little bit more about the both and, the both and. Because I feel like there is this reality, especially with folks right now. Their companies are probably like, get to know AI, understand AI, your job's not going to go away; the person who knows how to use AI is going to take your job. We're hearing all of the tropes. And we know that those folks

(22:42):
that lean into these really more human skills are the ones that are really going to thrive, not only for themselves but, honestly, for their team. Who wouldn't want to work with a leader like that? I'm curious about how people can start to tiptoe into this, especially that both and thinking, and really make the power of AI and our human capabilities work.

Speaker 1 (23:00):
It's really the best marriage of mind and machine. And the way that we look at it, and this was really based on hundreds of interviews that we did and also our data collection, but when we looked at each of these different qualities, there's a really nice kind of both and aspect of looking at, well, what's the best of tech and what's the best of human. And so, for awareness, the way that we framed it is in terms of, AI is

(23:23):
amazing at content, like more content than any of us could ever grasp, but human beings are amazing at context. Why am I here? What's important? What are my intentions? What else is going on? What else is relevant? And that ability to be able to marry that context setting with then leveraging content is a way to be able to get the best of both.

(23:44):
On the wisdom side, AI is amazing. Any question that you have, it'll give you an answer. What humans, though, are really good at, if we have the time and space, is being curious, beginner's mind, being able to think outside the box, and our critical thinking, to be able to say, when we get an answer from AI, I'm not really sure that's a good answer. What would be another question?

(24:04):
So this marriage of questions and answers is a way to again really have that both and thinking. And then, on the compassion side, the way that we looked at the both and was, really, human beings fundamentally are able to connect with their ability to care, their ability to create trust, their ability to look at you and say, I care about you, you're important to me, and to be able to lead with heart.

(24:25):
And AI, like I said, it's programmed with all of the best knowledge of human behavior, emotional intelligence. And so another both and is to be able to say, okay, I care about you guys, and how can I use that care and leverage AI to be able to help me? Because we have to have a difficult conversation, or we need to move an agenda forward and I don't know where to start,

(24:45):
but I want to be able to engage us in that process. And so those are some of the key things that we really see as being a way to be able to marry the best of both mind and the best of technology.

Speaker 2 (24:56):
Do you see that changing as AI gets more agentic and gets smarter? Do you see this changing, or do you see these as evergreen?

Speaker 1 (25:06):
It was one of the questions that we asked, and that we continue to ask, in our research, and so far we do see that these are evergreen. And that's why I think it's so interesting: even though AI is getting better at, let's say, context, it still doesn't have the amazing wealth of understanding and experience that a human does.

(25:27):
And I think that even when we look at agentic AI, it still, at this point in time, and again we're looking at a horizon of the next three to five years, needs to be told what to do. It still needs to have ground rules, and it still is limited in terms of what it can do. And even though it's got a really big box, thinking outside

(25:47):
that box is still something that is really in the realm of us mere mortals. So, at least for the next three to five years, we see these as evergreen. And our aspiration, our hope, is that leading with heart will always be augmented with AI, but remain something that is evergreen in terms of bringing out the best of our human leadership.

Speaker 2 (26:08):
I also am really taken with your finding that, even though AI can communicate with emotion, if you will, people, when they found out that the bot or the chatbot was a chatbot, were left feeling empty. I'm very taken with the fact that people still innately want a human, right, and I wonder if that's never going to change,

(26:28):
even when AI becomes like Minority Report and singularity and all this good jazz. I wonder if there's some sort of magic juju, that we're always going to want a human, no matter what, and these things are always going to be the case.

Speaker 1 (26:40):
Yeah, I deeply hope so, and I do think that is the
case.
The problem, though, is thatthese are at risk Our human
connectivity.
We know that there's anepidemic of loneliness and this
was before Gen AI came out andwe know that organizations that
are heavily AI dependent peoplefeel even less connected and
more lonely.
Why?

(27:01):
For a number of reasons.
One, because people overuse thetechnology.
Right, they use the technologyto be able to send a message
that really should be aconversation, but we also know
that because, when anorganization that's heavily
embedded with AI, people aren'tasking each other questions.
They're using AI to be able toask the questions, so they're
not turning to their neighbors.

(27:21):
And the other thing: critical thinking.
A recent study, this wasn't ours, but a recent study showed 74% of leaders are so overwhelmed that they would prefer to have a chatbot make their decisions, and that's scary, but real, right?
And so I think that the problem right now is that our awareness, our wisdom, and our compassion are under threat because of AI.

(27:44):
We're more distracted.
We have the risk of, I will say, instead of being wiser, actually being dumber if we delegate our decisions to AI, and of being more disconnected.
And what we really need to do, and I think that's why conversations like this are so important: we need to be really intentional about overcoming these real risks of artificial intelligence, so that we are

(28:07):
able to leverage the benefits and not get sucked into kind of the dark side of where the future of work could be going if we're not careful.

Speaker 2 (28:13):
Yeah, yeah, I wonder what you'd recommend.
Let's say somebody is, I'll take myself, for example, and I know a lot of people I talk to are the same way, right?
They have a large language model.
They're using GPT, Claude, Perplexity, whatever, doesn't matter.
It's their little assistant on the side.
They're using it more and more each day.

(28:34):
They're reaping the benefits of the efficiency of it, and maybe, slowly, they're talking to humans less and less, by 30 minutes.
Yeah, how do they break that cycle, potentially, and make sure that they're carving out space for more of the more human attributes?
Like, you know how people have phone addictions, it's just put

(28:54):
it away.

Speaker 1 (28:55):
You walk away from it.

Speaker 2 (28:56):
What do people need to be really thinking about, so they don't get into the trap of only using their large language model?

Speaker 1 (29:04):
First of all, I love the question, because I do agree.
We do know that people are addicted to their phones.
In many programs that we do with leaders, one of the simplest interventions is we take away their devices, and you should see the looks on their faces.
It's like we've taken out their heart: how could you, what, I'm going to be disconnected?
And it's so interesting that they actually do go through

(29:26):
withdrawal symptoms, because they're like, oh my gosh, what if somebody needs me?
And it's really interesting.
Many of us are addicted to our technology, and I do think that with these tools, because they are large language models, as you said, they talk to us really nicely, they're designed to please, they're really engaging to have conversations with, and they never get mad at us like real

(29:46):
human beings, real colleagues do, and they're designed to suck us in.
These are money-makers; these are not altruistic devices that have been created with the best of intentions, and so they're designed to suck us in all different kinds of ways.
So what I love about your question is that we need to make sure that we stay in the driver's seat.

(30:08):
Back to the Ferrari analogy: we need to make sure that we're in the driver's seat of our technology, and that we recognize, because many of us think we're smarter than our smartphones, and we're not.
Our smartphones are designed to be addictive, and until we wake up to that fact, we'll say, oh, I'm not addicted to my phone.
It's like, all right, let me take it away.
Oh, wait a minute.

(30:31):
So I think that we need to be aware that these tools are designed to suck us in and really promote use, which, again, is wonderful, because they're really useful in supporting us in our daily activities.
We need to be really practical.
Like you say: put the device away, get up, go for a walk; put the device away, have a conversation; put the device away, take some time

(30:52):
for reflection, to enhance your creativity, enhance your ability to think outside the box.
So I think that you need brain hacks to help you not get sucked into the technology, because these tools are designed to be addictive.
They put us in echo chambers, and that's another risk that we need to be intentional about overcoming.

Speaker 2 (31:07):
Yeah, yeah.
It's a very odd feeling when you realize you are addicted to your phone, even the muscle memory of reaching for your phone.
The other day: I have Claude and I have ChatGPT, and I have found I have started going straight there, as opposed to, wait, what do I really think?
What do I really need to be researching?

(31:28):
And so it's almost about not getting rid of the muscle, making sure I don't have atrophy, like human atrophy, or addiction, and it is a very intentional practice, but I think it's needed, yeah.

Speaker 1 (31:41):
What I loved about what you said is exactly that: it's got to be a practice.
It's so easy.
Let's say I've got to brainstorm, I need to write a new article, and it's so easy to go into whatever tool you're using and say, all right, write an article for me in the style of HBR.
It could even write an article that Jacqueline Carter would write in HBR, because it does have access to the web, and it's

(32:02):
so tempting.
What I loved about what you said is: no, I've got to force myself.
It's like going to the gym.
I've got to force myself to make sure I continue to go to the gym.
And that's the other analogy that we use oftentimes when we're talking about AI and how it can augment.
It's like looking at it like an exoskeleton, right?
An exoskeleton, we know, helps us enhance our strength.

(32:24):
And AI can be like an exoskeleton that can really help us augment our mind and augment our heart.
But if we don't, at the same time, develop our mind and our heart, they're going to atrophy.
If we just let that exoskeleton do all the work, our muscles will atrophy.
And I think what you said is exactly it.
It's a practice to say, wait a minute:

(32:51):
What do I think?
How would I write this article?
Wait a minute, what do I know before I go to my tool?
What would be a good way to create this presentation, or to have this conversation?
And then augment with the tool to help you, but don't lose the muscle.
And I think that's exactly it.
We're really at risk of losing some of these core, fundamental muscles, like critical thinking, like emotional intelligence, because we're over-relying on our technology.

Speaker 3 (33:12):
The addiction to your phone is so real.
I don't remember where I saw this, but someone mentioned, if you start to have this little indent on your pinky finger where you hold your phone, that means you have forever changed the bone structure of your finger from where you hold your cell phone.
And I looked down and I was like, is that a dent?
And I started to slowly back away.

(33:33):
For anyone listening, check your pinky.
When you think about getting into some of those ethical guardrails, as we're talking about not letting these muscles atrophy: we're introducing this to teams.
Francesca and I are trying to advise folks on how to introduce this to your team without fear, testing and learning in a safe way.
Given everything that you've researched, what's like a one-

(33:57):
sentence AI policy for a leadership team?

Speaker 1 (34:01):
Oh, I love that.
The one-sentence policy I would say is: human in the driver's seat.
Do not allow these tools to overcome your human judgment, your human responsibility, your human accountability, and be aware of the seductive nature of these technologies to delegate decisions.

(34:22):
If I was going to have a one-word policy, it would be: always human in the driver's seat.
And then, of course, you said just one.
But we are deeply concerned about considerations around using these technologies in terms of the environmental impact.
We are concerned about data security and privacy, which is already a concern.
It was a concern before artificial intelligence, and now

(34:44):
all of this information: who's storing this information?
How is it being used?
So there's a long list, but the one is: human in the driver's seat.

Speaker 3 (34:53):
By the way, that's the smartest policy possible when you're looking at that Workday class-action lawsuit.
Exactly, yeah.
Okay, I love to hear it.
When you think about the case studies, because you had multiple highlighted in the book, was there one case study from your research in particular that kept you up at night?
It could be good or bad.

Speaker 1 (35:14):
There was one.
We didn't put it in the book.
We had the privilege of being able to talk to chief people officers, chief learning officers, CEOs, as well as tech leaders.
And probably the story that scared us the most, and we were shocked by this, and I will not say the name of the company, but it was a story: we were sitting down with the chief human

(35:37):
resource officer of a global technology company, and she told us a story about a senior executive in the organization who had basically been deep-faked by somebody posing as the CEO of that company, and was about to transfer millions of dollars.
It scared the bejeebies out of us, and this was

(35:57):
actually like a year and a half ago.
And I think it gets back to: we all think that we're smarter than our smartphones, we think we're smarter than our devices, but it was just unbelievable, because I would think, oh, that would never happen to me.
And when she talked about this case: the guy had emails from his CEO, he had text messages, he had video

(36:20):
little snippets telling him he was on a secret project and not to tell anybody about it.
It was an extensive scam over multiple months, and this guy had absolutely no idea; he had been completely hoodwinked by it.
And it was just like, whoa.
So that was really scary.

Speaker 3 (36:39):
Yeah, as the video continues to get better and better.
On TikTok, right?
Has anyone seen the fake Tom Cruise?
Oh, fake Tom Cruise is crazy.

Speaker 1 (36:49):
What is this?
And I think it's great.
Yeah, it is really scary, and I do think that we are so susceptible: if we see something, even if somebody says that it was created by artificial intelligence, we have a tough time unseeing it.
It's part of our neurology, right?
We trust what we see, and that is how our brain has been designed and wired over so many centuries.
So even if somebody says, oh yeah, that was

(37:13):
fake, no, it still sticks with us, because we saw it, so it's real.
And so I think one of the bigger, larger concerns that we have is just around the continued question of what's real, what's not real, fact versus fiction.
But not only that: how we are so influenced by our quote-unquote peers, our tribes, and how,

(37:33):
again, social media.
And I think that's one of the things that we focused a lot on.
Human beings have always had an amazing history of introducing new technologies without necessarily looking at the potential negative consequences.
Social media was designed to make us better connected, and how's that working out?
Email was supposed to save us a ton of time, I don't know.
And

(37:54):
so I think that's, for us, one of the big things that really started to scare us when we started to look at this technology: how fast it's moving, how fast it's being pushed.
Every organization right now, and if they're not, they should be, is pushing adoption of AI, and they should, right, because they've got to get ahead of all their competitors.
So every organization is pushing

(38:15):
adoption, but I don't think we're spending enough time thinking about, wait a minute, what are the potential consequences of this adoption, and are we taking the time to pause and say, what are we potentially at risk of?
And that's really a lot of the work that we do with leaders: we talk about the adoption and how to embrace it, and

(38:35):
we talk about, I think, Francesca, to your point, how to have the brain hacks that you mentioned: put away the device.
Let's just make sure I'm still using that muscle that I have as a good leader, as a good human being.

Speaker 3 (39:04):
Okay, Jacqueline, are you up for some rapid round?

Speaker 1 (39:07):
I am.
I'm a little bit scared,honestly, Mel, because I don't
know what's coming.
But bring it on, I love it.

Speaker 3 (39:12):
I promise these are harmless and fun, and hopefully
you will have fun with them too.
Okay, it is 2030, not far off, by the way.
What's work going to look like?

Speaker 1 (39:22):
No idea, very simple.
Anybody that tells you that they know what the future of work looks like is making things up.
I can tell you two things, though, that I know for sure about the future of work in 2030.
The first thing is it is fundamentally AI enabled, and it doesn't look anything like what we see today.
And the second thing, and this is both my prediction and also

(39:45):
my aspiration, so there's a little bit of hopefulness: those of us that are able to double down on the best of our humanity will be the ones that are thriving in the world of work in 2030.

Speaker 3 (39:57):
I love to hear that.
What's one thing about corporate culture you're ready to see die, that you're actually excited might be gone by 2030?

Speaker 1 (40:07):
I do think that there are so many.
You guys, of course, have been around the halls of the corporate world for so long.
There are so many bureaucratic tendencies: box-checking, ticking activities, reports that nobody reads, emails that are just out of control.
And I guess I am really excited about the opportunity for us to

(40:27):
rethink work, so that what work really becomes is the opportunity for us to really thrive in terms of human connection, ultimately, the opportunity to inspire each other, to connect with each other.
That's the way we get great ideas, that's the way we build trust, that's the way we engage our customers in a way that makes them feel, wow, these are awesome people to work with.
And

(40:54):
I just think there are still so many bureaucratic elements of work today that we've been talking about for years letting go of.
So I hope to see those shift, and probably I'll say one thing: meetings where nobody knows why they're there, and there's no agenda, and everybody thinks it's a waste of time.
Any time we can do that, let's do it now.

Speaker 3 (41:12):
There was a tool that was out a few years ago.
Francesca and I were like, how do we tap into this?
It used to tell people... I forget who was using it, but I read this article where one organization, any time they set up a meeting, it told you the potential cost of that meeting based on who was in the room.

Speaker 1 (41:28):
And I'm like, genius, we all need that.
Nice.
And of course, it's something we can get into, but a lot of AI tools, if we use them well, can give us a summary: was this a good use of time?
Did everybody contribute?
What were some things that could... There's a great tool right now where you can say, could this meeting have been an email?
AI can really help us look at the quality of

(41:51):
our human interaction and help us lean more into that.
If we use it well.
If we use it well, that's the key.

Speaker 3 (41:59):
You might have already answered this, but I want to ask just in case you have a different response: what is the greatest opportunity most organizations are missing out on right now?

Speaker 1 (42:08):
Yeah, human potential.
I think that right now there is so much focus on AI, and of course, we just wrote a book and we're doing research on it.
There's so much focus on the technology, and organizations, I get it, they're investing so much money in the technology, but they're missing out on the opportunity

(42:29):
to really develop and support and leverage the best of our human capabilities, and that is what's going to enable us to use these tools well and get the return on investment of these amazing technologies.

Speaker 3 (42:43):
Yeah, okay, it's getting a little personal.
What music are you listening to right now?
What's on your playlist?

Speaker 1 (42:50):
Oh my gosh.
Okay, that was a real... It's so funny.
I have to say that I was just with a girlfriend over the weekend, and we were laughing about eighties music that we still love to go back to as a go-to.
So I have to say, I'd love to try to pretend that I'm hip and current, but people would laugh at me if I tried to pretend that.
Yeah, 80s, 90s, those are my go-to.

(43:12):
But actually, I love Pink these days.
I don't know why.
She just really is inspiring to me, and I guess she's current, so maybe that would be my lead into modern music in this age.

Speaker 3 (43:26):
I'm not going to judge your 80s and 90s, because I'm right along with you.
I was listening to Cyndi Lauper yesterday on my drive, Girls Just Want to Have Fun.

Speaker 1 (43:31):
How can you go wrong with that?

Speaker 3 (43:32):
No judgment.
What are you reading right now?
It could be an audiobook, it could be the old-school turn-the-page.
What's on your reading docket?

Speaker 1 (43:43):
I'll tell you what book I just finished, which I just loved: Nexus.
I am old school.
I travel all the time, and I have tried to use audiobooks, but I'm a tactile reader; I just love actually turning the pages.
And Nexus is a really thick book, so carrying it around has been a real chore, but that means something every time I open it up,

(44:04):
and I just loved it.
I think he provides such a fantastic, interesting insight on democracy and information technology, and just recognizing some real risks that we're facing with these new technologies and, of course, the state of the world.
And so I love books like that, so it's a great read.

(44:25):
Okay, who do you really admire?
Oh my gosh, there's so many people that I admire.
As soon as you said that, and I guess that's what Rapid Fire is all about, the first person that came to mind is Michelle Obama.
She just came to mind.
But I also, in my work, have been so privileged to work with senior leaders, and I could name so many of them, but particularly chief people officers right now

(44:48):
that are really in a challenging position, where they know the future of work, as we've talked about, is going to radically change, and they need to hold that space where there's so much fear, and, at the same time, they need to be honest, because there are changes coming in terms of workforce transformation.
Anyway, I just really admire a lot of

(45:09):
the chief people officers.
So a big shout-out to all of them that are standing in this space, at this major inflection point of work, and leading with courage, with care, but also with clarity and with integrity.

Speaker 3 (45:26):
Yeah, I know HR always has the tough job, right?
Because you're in the sandwich between the board and the employees and what that looks like.
You're always in the middle, but always with the best intentions, hopefully, and if they read your book, for sure they'll have some good guidance there.
What's a piece of advice that you want everyone to know?

Speaker 1 (45:50):
That was such a good question.
I think: lean in.
At least in my career and my life, I've always trusted my gut, even when I was afraid, and I always liked the definition of courage as stepping into places that scare you.
And I think that there is a lot for us to be fearful of, whether it's fear of social rejection, right, there's so much tension in terms of having a tough conversation,

(46:12):
or whether it's concerns about, will I have the skills that I need in the future?
And I guess, yeah, leaning into the places that scare you, and recognizing that you're not alone, and being willing to have courage and take risks.
I'm not saying I always do that, but that's advice I try to give myself, and hopefully maybe that'll be helpful for others.

Speaker 3 (46:32):
Yeah, I think it's good advice, right?
Like, we're in a time where we're all learning, so now's a good time to have that courage.
Where can listeners stay in touch with you, stay in touch with what you're doing?
What's the best way to stay connected?

Speaker 1 (46:45):
Yeah, absolutely.
You can follow me and find me on LinkedIn, and please feel free to reach out.
But also, as I said, I represent an amazing organization, Potential Project, www.potentialproject.com, and a lot of the research that I shared is freely available.
So if you don't want to buy the book, that's okay; a lot of the research we post on our website, and you can also follow Potential Project, where we share, because

(47:07):
this is ongoing, research and insights.
Yeah, it's a great way just to keep in touch and reach out.

Speaker 3 (47:15):
Perfect, and we'll link to all of that in our show notes too, so folks can get easy access to that.
Thanks for joining us today, Jacqueline.

Speaker 1 (47:29):
Thank you so much.

Speaker 3 (47:29):
I just loved this conversation, and thank you so much for being both intentional and also really future-focused in our discussion today.
Appreciate it.
This episode was produced, edited, and all things by us: myself, Mel Plett, and Francesca Ranieri.
Our music is by Pink Zebra, and if you loved this conversation and you want to contribute your thoughts, please do.

(47:49):
You can visit us at yourworkfriends.com, but you can also join us over on LinkedIn.
We have a LinkedIn community page, and we have the TikToks and Instagram, so please join us on the socials.
And if you liked this, and you've benefited from this episode, and you think someone else can benefit from this episode, please rate and

(48:10):
subscribe.
We'd really appreciate it.
That helps keep us going.
Take care, friends.
Bye, friends.