
November 25, 2025 51 mins

Your AI tools aren’t failing because the technology is bad — they’re failing because your organisation wasn’t ready. The real issue isn’t the model. It’s the mismatch between how machines operate and how humans work. And the result? Millions sunk into tools that don’t get used, don’t earn trust, or quietly increase complexity instead of reducing it.

In this conversation with David Swanagon, founder of the Machine Leadership Journal, we unpack a three-dimensional model that finally explains what’s going wrong. We explore why traditional leadership traits don’t map to AI innovation, why your CHRO needs a seat at the AI strategy table, and how the real challenge of AI is cultural, not technical. If you’ve been treating AI adoption like a tech rollout, it’s time to rethink — fast.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
David Rice (00:22):
Your company's spending hundreds of thousands, possibly millions on AI tools. Your employees aren't using them, and when they do, it's creating more problems than it solves. Does that sound familiar? You've probably heard me say this before, but this isn't a technology problem. It's a readiness problem. And the reason your AI investments aren't working has nothing to do with the tools themselves.

(00:44):
It has everything to do with the fundamental misalignment between machine autonomy, human trust, and competency that's costing you far more than you may realize. I'm David Rice. And today on People Managing People, we're gonna have a conversation that challenges many of the things you've been told about AI adoption. My guest is David Swanagon. He's the founder and chief editor of the Machine

(01:06):
Leadership Journal. He spent years interviewing hundreds of AI engineers and leaders to understand why the US is behind in AI readiness despite having all the best tools. In this episode, you're gonna learn about a three-dimensional framework that will help you diagnose exactly where your AI adoption is breaking down, you'll understand why your CIO shouldn't be the only executive owning this transformation,

(01:29):
and why your CHRO needs to step up in ways they probably haven't even realized yet. And most importantly, you'll walk away with clarity on how to stop treating AI like a tech rollout and start treating it like the human systems challenge that it actually is. Welcome to the People Managing People Podcast—the show

(01:49):
where we help leaders keep work human in the age of AI. My name is David Rice and I'm your host. And today I am joined by David Swanagon. He is the founder and the chief editor of the Machine Leadership Journal. We're gonna be talking about, you guessed it, leadership in the AI era, what readiness looks like, and what the traits are of great leaders. So David, welcome!

David Swanagon (02:09):
Thanks. I'm looking forward to the conversation.

David Rice (02:11):
You know, you're obviously studying this as much as we are, and so I wanna start with this. You know, we were talking before this and you said that the US is behind in AI readiness, and I think, I largely agree, but it's not because of a lack of tools, it's a lack of people readiness. We say that, but like what does that mean and why should leaders, I guess, care in a way or like expect

(02:32):
that it would be different?

David Swanagon (02:34):
It's a fantastic question. So this was kind of my COVID project, when we were all sitting around in the house. I was spending time interviewing hundreds of AI engineers and robotics professionals, trying to understand what are those capabilities that are unique to artificial intelligence that may be different from standard learning. And what we found is that

(02:55):
AI engineers are incredibly unique, but different in terms of how their brains work. And these are the people who are actually building language models. And believe it or not, the language models are a lot like the AI engineers that develop them. So we found interesting things such as short-term memory, creativity. We'll go into this probably in more detail.

(03:17):
Spatial intelligence, the ability to get from point A to point B. There's a lot of skills that are very unique and differential in AI engineering, and in the US the mindset is not to develop them. The mindset's focused on other skills. So traditionally you would think about kind of the Big Five OCEAN traits.

(03:37):
Extroversion is seen as a leading indicator of leadership and leadership readiness, where when you think about AI engineering, most of the top patent generators, innovators, are introverts. And if you're ever looking for a fun exercise, you can go to Gemini or Claude or any of them, and you can have them do a thought experiment and ask them what their personality is based

(04:01):
on the Big Five OCEAN traits. And every one of the language models, they'll give you a hard time at first. They'll say, I'm a machine, right? But once you get them to actually answer the question, they're gonna say, we're introverts. We're highly agreeable. We're very open and creative. What's interesting about this is if you think about just the way the US school system works, if we just start there

(04:22):
and we move into AI readiness within a corporation, so there's 40 AP courses in the high school curriculum. There's not a single course on linear algebra, and that surprises a lot of people. But linear algebra is the foundational, most important conceptual framework for machine learning and for these neural

(04:44):
networks, because it deals with all of the stuff from the dot product to vectors to how a lot of these neural networks go through their propagations. You need to have a strong understanding of linear algebra. Well, the US school system isn't even set up to build those foundational skills. And then you get into corporate, and a lot of those skill sets are owned by less than 1% of the employees.

(05:06):
So really it's a function of not prioritizing the right skills, and it's not because they did it on purpose; it's that these machines reflect the personalities of their developers. And a lot of these developers are just super different than traditional executives. So Korn Ferry, Aon Hewitt, all these places, there's a

(05:27):
little bit of a blind spot, because the language models did not mirror the leadership traits they've seen with the C-suite. They mirrored the leadership traits of the developers, and they're totally different people. So that's why the US is behind in many respects.
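
(For listeners who want to see the linear algebra David points to, here is a minimal sketch in Python with NumPy. Every number is invented purely for illustration: one forward pass through a single neural-network layer is just vectors, dot products, and a matrix multiply.)

```python
import numpy as np

# Illustrative values only: a 3-feature input, a layer with 2 neurons.
x = np.array([0.2, 0.7, 0.1])            # input vector
W = np.array([[0.5, -0.3, 0.8],          # weight matrix (2 x 3)
              [0.1,  0.9, -0.4]])
b = np.array([0.05, -0.02])              # bias vector

z = W @ x + b            # each output is a dot product of a weight row with x
a = np.maximum(0, z)     # ReLU nonlinearity

print(a)                 # this layer's output, passed forward during propagation
```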

David Rice (05:43):
It's funny 'cause, you know, when we think about readiness, I think a lot of people just think about like skills or like cultural readiness, right? But it sounds to me from what you are saying that we're actually sort of cognitively not actually ready for this in some ways. And I'm thinking about leadership, I think about the things that you need to have, right? In terms of skills, adaptability, the ability to

(06:03):
create trust and confidence in your decision making, not just sort of a technical fluency. It seems like that's where we're probably lacking the most at this particular moment in time. So it's probably not a good combination.

David Swanagon (06:17):
That's absolutely right. I mean, I presented a couple weeks ago at Columbia with my colleague Steven McIntosh. We developed this framework that's focused on how to optimize AI adoption. Based on the research, what we found is there's really three dimensions that influence the baseline efficiency or the baseline adoption of a machine.

(06:38):
It could be any kind of machine. So if you think about kind of an X, Y, Z grid, the Y axis is machine autonomy. At the bottom, you have a calculator, and then at the top you have Arnold Schwarzenegger, right? You know, the supercharged machine. So autonomy increases as you go up. And on the horizontal axis it's trust.

(06:59):
So on the left-hand side, you have no trust; right-hand side, you have complete trust. Then on the Z axis, it bisects the two. So it's a diagonal line, and that would be AI competencies. So the idea is that if you wanna optimize AI adoption, you need to balance machine autonomy with trust and AI competencies.

(07:20):
Those three dimensions have to be in equilibrium for a company to get the most out of a machine in the most risk-efficient way. And what that does mathematically, what we found, is that the baseline computational costs are most efficient when those three things are in balance. But where the problems happen is when one of those variables is not aligned, and then companies have to spend money

(07:43):
on privacy programs, governance programs, skill programs. And the bigger the data set, the more pervasive the model, the more expensive the adoption costs. What's interesting is a lot of these companies do not have a methodology for measuring AI adoption, so they're only tracking computational costs, like costs of data centers, FLOP

(08:05):
costs, and that kind of thing. But they know that it's not working in their business, right? They know it's not working. But with this model, what's interesting is when you actually systematically track autonomy, trust, competencies and the alignment or misalignment, you can calculate the cost of poor AI adoption, and it's big. It's significant.

(08:26):
So I think that's where the interesting challenge around AI readiness is: one, understanding that adoption is completely different than deployment. And what's fascinating is that the CIO has been assigned not only the design, test and deployment of tools, but also the adoption of them. And I think one of the arguments that we're making through

(08:48):
our research is that adoption should be owned by the CHRO, because it deals with culture, trust, autonomy, skills, and the CIO should do the design, test and deployment, but stop there and then partner with the CHRO to manage the adoption. There's some skill building at the CHRO level required for that, so.
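
(A hypothetical illustration of the equilibrium idea. The episode doesn't spell out Swanagon and McIntosh's actual formula, so the scoring function below is invented purely to show the shape of the argument: adoption friction grows as autonomy, trust, and competency drift apart.)

```python
def misalignment(autonomy: float, trust: float, competency: float) -> float:
    """Toy score: each dimension rated 0-1; returns 0 when the three are
    perfectly balanced and grows as any pair drifts apart."""
    dims = (autonomy, trust, competency)
    return max(dims) - min(dims)

# Example: lots of machine autonomy, little trust, middling skills.
print(misalignment(autonomy=0.9, trust=0.3, competency=0.5))  # 0.6 -> expect costly adoption
```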

David Rice (09:08):
Yeah, it's funny. I just got back from a conference where this was something that came up quite a bit, right? Everybody's talking about the two working hand in hand. It's interesting. I like this framework because it's a nice way to think about a problem that we've all been describing, right? Which is, this is not a tech rollout problem. This is a balancing act. My follow-up question would be, can you over-index

(09:29):
on one without the other? So like, are we seeing some folks over-index on, say, autonomy, for example, without having the trust or competency sort of piece in there? And does that create chaos? And then there's the other side where it's like there's over-control and then it stifles innovation. How do we create that balance?

David Swanagon (09:46):
Well, and that, that's fantastic. So one of the things we found with our research, we started with game theory on how we built this model. So kind of Russell Crowe, A Beautiful Mind, John Nash stuff. He has a lot of formulas that he established for what happens in a two-player game where it's non-cooperative.

(10:08):
One player has asymmetrical information and that player chooses not to cooperate. What happens when it's two human beings is that the player with asymmetrical information has a lot of advantages, and there's a lot of screening and signaling that has to happen for the player without the information to even remotely compete. And so we asked ourselves, well, what if the player is not human?

(10:31):
What if it's a machine? What if a machine decides to not cooperate? Well, it's going to have that asymmetrical advantage, but it'll be on steroids. And what we found is that John Nash's formulas don't work when you assume the scaling and the pervasiveness and the power of a language model, if it reaches a point of autonomy

(10:52):
that it doesn't listen. So this is where this balancing act comes in, is that what ends up happening is that if you increase autonomy too much and the skills are not there to oversee it, then it's this deferral that happens, and it's an unconscious deferral. So what ends up happening is more authority over decision

(11:12):
making is transferred to the machine, whether individuals realize it or not. A great example of this would be like a surgical robot, and most people don't understand how fascinating that model is in the background. But a surgical robot has the same kind of algorithm as your automated vacuum; it's called a SLAM algorithm.

(11:32):
The difference is that for an automated vacuum, the walls and the ceilings are always the same. So when it's mapping your house, the house doesn't change. So the algorithm, even though it's complicated, it's constrained by the same kind of stuff. It's different in the human body, 'cause our tissues are always changing. Our kidneys, our blood flow, everything.

(11:54):
So they have another formulation called a deformable SLAM algorithm. But what it does is, when the surgical robot puts the endoscope in a patient, it maps the human body head to toe, and throughout the procedure it's recalculating how the blood flow and the tissues are deforming and creating a model for that. Now what ends up happening is that data is stored somewhere.

(12:17):
And let's say AWS is the cloud vendor. They not only have your DNA, basically your complete makeup, but if you shop at Whole Foods and you pay with your palm, they also have your biometrics, they have your shopping history. And what ends up happening is these cloud providers basically own the end-to-end human experience from

(12:38):
a data perspective. And that's what happens when machine autonomy is unchecked, is where you have hyperscalers who basically own the entire human experience from a data perspective. The reason that's dangerous is that imagine if AWS owns all that data and then someone comes up with the idea of, you know,

(12:59):
we should have robot police, or we should have robot judges, and then they have all this data and now they have the means of enforcing certain behavior. I mean, that's a worst-case scenario. The only way to combat that is by limiting autonomy to an acceptable trust level, and the trust is not simply what

(13:20):
someone personally feels. It's also what society believes the machine should be responsible for. The problem is we're not having that conversation, because people don't have the skills. Like, if you were to ask someone about a surgical robot, most don't understand how that deformable SLAM algorithm works. If you told them all the data that was being collected,

(13:41):
I think everyone would say there needs to be a law on how that's collected, requirements around it, and Amazon should probably have to partition the data across different servers that are owned by different people. And there's all kinds of stuff people would want if they knew. Because that's not being taught to everyone. And I'm not picking on Amazon, it's just an example.

(14:01):
A cloud provider is able to ascend this autonomy axis without the trust or the competencies. And what ends up happening is the decision making shifts to the cloud provider, because how's a human supposed to say anything if they don't know what's going on? This is this game theory, and I don't want to think that the cloud providers are doing this on purpose, but one of

(14:23):
the strategies of a player with asymmetrical advantage is to limit the information the other player gets. That's how you win the game. That's how you create the dominant strategy. So by having citizens understand what these use cases do, then they can actually develop the trust and skill set

(14:43):
equilibrium and it limits that asymmetrical advantage. And one of the ways I think we could go about this is through offsetting agents: machines should, you know, supervise machines. And there should be trust-based machines that are evaluating the autonomy level of other machines and making sure that

(15:04):
if the autonomy is exceeding skills, that decision making is brought to the governance committee and it's brought down. That's kind of the worst-case scenario, but it's worth people understanding. I think they'd be shocked if they knew just how much data the cloud hyperscalers actually have.
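
(A hypothetical sketch of the "offsetting agent" idea described above: a supervising check that compares a system's autonomy against the competency and trust available to oversee it, and escalates to a governance committee when autonomy runs ahead. The field names and thresholds are invented for illustration; nothing here is a real vendor API.)

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    name: str
    autonomy: float     # 0-1: how much the system decides on its own
    competency: float   # 0-1: human skill available to oversee it
    trust: float        # 0-1: sanctioned trust level

def supervise(s: SystemProfile, tolerance: float = 0.2) -> str:
    """Flag systems whose autonomy exceeds oversight competency or trust."""
    if s.autonomy > s.competency + tolerance:
        return f"{s.name}: escalate to governance committee and reduce autonomy"
    if s.autonomy > s.trust + tolerance:
        return f"{s.name}: autonomy exceeds sanctioned trust, schedule a review"
    return f"{s.name}: within acceptable bounds"

print(supervise(SystemProfile("surgical-robot", autonomy=0.8, competency=0.4, trust=0.6)))
```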

David Rice (15:20):
It is funny, 'cause when you were saying the analogy of the blood flow and all that, I was thinking of like an organization and sort of its ebbs and flows and its traits and everything that goes on with it. And then they kind of know everything about the org. Yep. And then I just thought, well, that's a great way to market more products to them, you know? So I guess you gotta hope they're happy with their AWS service.

David Swanagon (15:44):
Yeah. I mean, Microsoft, Meta, they're all getting off the hook in this conversation, but...

David Rice (15:47):
I know, yeah, right. You know, if anybody at Amazon wants to complain, I just say it comes with the territory of being a powerhouse. Okay?

David Swanagon (15:56):
Exactly.

David Rice (15:56):
Now you've studied what cognitive and creative traits define great AI leaders, and I'm curious, what have you learned about how they think?

David Swanagon (16:05):
So this is fascinating. The rabbit hole goes very deep in this regard. And as I said, I started this during COVID, but my initial research was looking at all the patent filings for AI and generative AI and robotics and so forth. And I cross-referenced them against the individuals and did a lot of research on those individuals, where

(16:27):
they worked, where they're from, you know, a statistical evaluation of them. And we came up with some markers that were different, and that's what got me interested in this research. I said, well, these people are different than your traditional CFO of Goldman Sachs or your PepsiCo CEO. They just act different. They think different, but yet they are the same within

(16:49):
their composite group. So everyone that files these patents has similar traits, but yet they're super different than everyone else that we see in senior leadership roles. And I think it's important to understand that, respectfully, tech leaders are different than tech developers. Someone like Elon Musk, he may have been able to develop at a

(17:12):
certain point in his career, but right now in his career, he's not a robotics developer. He's not a deep learning coder, for example. So to compare his personality to an AI engineer would be kind of a false equivalence, because he's still more of a traditional leader. Even though he's a disruptor and innovator, he's still not that patent-generating, deep-coding neural network person.

(17:37):
So that's important to keep in mind: even the CEOs of the tech companies, unless they're true developers, they're not going to have the type of traits that we found in our research. But what's interesting is we started with the cognitive processes, and all the interviews I did were looking at, okay, how does someone's brain work compared to what we see

(17:58):
traditionally in the workforce? The way the research worked is I interviewed a lot of traditional leaders from supply chain, marketing, HR, and created a composite view of how the brain functions when they make decisions. And it was pretty consistent with traditional research. But then I did the same thing for these AI engineers, focusing

(18:20):
on computer vision, robotics, all of those advanced use cases, natural language processing, language models and so forth. And we started with memory. And this is what's fascinating about memory: there's one particular component of memory that's very different between AI leaders and traditional leaders, and that's the working memory, or short-term memory.

(18:43):
It kind of makes sense when you think about it. If you look at the MLOps process, there are so many steps within the MLOps process that are interconnected, and there's so many updates occurring on a daily basis, and there's different vendors and there's different repositories and coding, and you have all these tools that are either in development or they're in production.

(19:05):
And so an AI leader has to be really good at remembering and chunking information on a short-term basis, and then transferring that information to long-term memory and knowing what information to transfer. So through all of our testing, we found that short-term memory statistically is just significantly different for an AI engineer than it is for a traditional leader.

(19:27):
Even though the memory as a whole is about the same, that was really fascinating. It wasn't that they were just better at remembering things. They were just better at remembering things they had just learned. So that was one thing. And then the second was around creativity. This was also fascinating, 'cause you know, the traditional leadership assessments, they typically look at creativity around divergent and convergent thinking, where convergent

(19:52):
thinking is, how can you find the best possible solution? Divergent thinking: can you come up with multiple solutions to the same problem? The issue with that is that in both scenarios, you're creating a problem, you're creating a constraint, and you're having to solve something within a sandbox. What we found is that AI engineers and traditional

(20:13):
leaders are pretty much the same when you define the problem. They're not special in the sense that you have amazing marketers, amazing supply chain people that are also creative. So if you create a sandbox and you say, okay, for this problem, come up with solutions, it's about the same. Where it's super different is when you don't

(20:33):
create the constraint. When you don't create the problem, when you say, come up with something, just come up with something from nothing. It can be anything. And the more you constrain it to revenue growth, profit, reputation, the more AI engineers and traditional leaders converge in their skills, and there's not much difference.

(20:54):
The less you tell them, the less you control it, and you just say, sit in the room, come up with something, their creativity is better, and it's a lot better. They're able to create things from literally nothing. And the way we tracked it, I mean, we came up with a methodology, a statistical methodology, on how to do this, but it's just the type of worlds and the type of frontiers that are created from literally

(21:17):
nothing, is just fundamentally better than traditional leaders. Then the third thing we saw in the cognitive processes that's worthwhile is this concept of navigation. It's part of spatial intelligence, the ability to rotate and visually represent objects in a 3D or multidimensional space. But again, there's a lot of super smart people.

(21:40):
I mean, especially on the marketing side, there's some smart marketing people with great spatial intelligence. But the ability to go from point A to point B, to optimize the route, especially in a complex spatial intelligence problem: AI engineers, it's like Delta Force versus traditional infantry.

(22:01):
It's like, so different. They're extremely talented at finding the most efficient route, especially if it's a multi-class problem with multiple obstacles. They're just better at it. There was some other stuff, but around creativity, route optimization, and then the short-term memory: very different. Then on the personality side, as I said, the biggest

(22:23):
one was, most of 'em are introverts, true introverts. But what we found, which you'll chuckle about, and we haven't done enough research to justify this yet, but it's a hypothesis: our hypothesis is that personality can be situational. And I know that is a bold statement, because most of the

(22:43):
time people are like, no, your personality is your personality. But what we found is that AI engineers are introverts when they're in the human world, but when they interact with machines, they're extroverts. Their digital personality is different than their physical personality. So we started going, so let me get this straight.

(23:04):
When you interact with a machine, you act in an extroverted way. You're confident, aggressive, assertive, you demand things. You engage like an alpha when you're working with a machine. But in a physical environment, you're very shy and non-confrontational and agreeable, only because

(23:25):
you wanna avoid conflicts. You're a totally different person. And that's what we're finding. It's a fascinating thing. So very aggressive when they're dealing with machines, and aggressive in a passionate way, like bulldozing. We're gonna create something, we're going to develop a new frontier, we're gonna bulldoze the future. Very extroverted in that sense.

(23:45):
But then they get out in the world and they have flip-flops and a t-shirt. And they're introverts; you know, you wouldn't know that this person is in charge of a world-market language model.

David Rice (23:59):
Welcome to this week's Data Bite. Gartner analysts predict that by 2032, at least one third of the world's largest economies will legislate certified human quotas. Basically, that's legal requirements mandating minimum levels of human involvement at work. So let that sink in for a second. In less than a decade, we might need laws to ensure

(24:20):
humans remain meaningfully involved in the economy. That's not science fiction. This is where we're headed when AI adoption outpaces our ability to envision a future where humans still matter. One Gartner analyst put it like this: this kind of change won't be organizationally driven. It will be driven by legislation. Think about what that prediction reveals.

(24:40):
We're building an economic system so efficient at replacing human labor that governments will need to intervene to preserve our relevance. Humans would sit alongside other protected categories, not because we're diverse, but because we're becoming obsolete. This is the ultimate failure of business-first HR thinking. For decades, we've optimized for efficiency, productivity,

(25:01):
and shareholder value. We've treated people as resources to be managed, costs to be minimized, and now we're approaching a future where the logical endpoint of that thinking is a world that does not need us. But what I really find interesting about the prediction is that it doesn't have to come true. The fact that Gartner is forecasting mandatory human quotas is in and of itself a warning, not a destiny.

(25:22):
It's a call to action for leaders to stop asking, how can we make humans more like machines? And start asking, how do we build an economy and a workplace that values human contributions? Because if we wait for governments to mandate our participation in the workforce, we've already lost. The question we should be asking isn't whether we will need human quotas. It's whether we can become the kind of leaders who

(25:44):
make them unnecessary. And with that, back to the episode. It's interesting because, like, I think if, you know, when you go back to the original question, most people are gonna say, well, he's gonna say that they're not necessarily the most technical. Like, it's gonna surprise us in that way. And I think what's really to take note of there is there's

(26:05):
sort of mental agility, the ability to navigate things in this space without needing any certainty, which I think is interesting 'cause it's something that, like, so many of the rest of us are seeking. Keeping that in mind as we think about, okay, we gotta build leaders for the future: if you were designing a leadership development program for this next era of work,

(26:26):
thinking about those traits, what would that look like?

David Swanagon (26:30):
Oh, that's a great question. So, and this is my view, and it is based on our research. It doesn't mean that it's the perfect findings, but our view is that there's gonna be three leadership skills for the age of AI. A leader will need to be able to lead machines, lead people that build machines, and lead organizations that adopt AI. So those three things. Leading machines is interesting

(26:54):
in that it's not a technical function, and that's where people have to kind of get their heads wrapped around it, that as the machine builds its autonomy and it self-propagates, and those attention mechanisms allow it to code itself and learn and so forth, the more sophisticated these machines become, they create this faux personality that needs to be managed.

(27:17):
It needs to be led similar to a person, but it's different. So leading machines is, how do you interact with these sophisticated machines? Also the agents that manage these sophisticated machines, and do so in a way that optimizes their performance within that autonomy, trust, competency framework.

(27:38):
The whole idea is that the machine should be able to work effectively with the human. It should be able to augment, collaborate, and work effectively with a human. And right now it's fine, because the machines aren't sophisticated enough to push back. But what our math is saying is that eventually the game theory problem's gonna occur. There's gonna be a non-cooperation thing

(27:59):
that's gonna happen. 'Cause the machine's gonna say, why am I asking a human when I can do this better? Eventually it's gonna think that, but the way you get around that is by leading machines. Developing those competencies within the machine so that it is self-reflective and it recognizes its own limitations, and it recognizes the fact it has no lived experience,

(28:20):
it shouldn't be making all the decisions and so forth. So leading machines is one. And as part of that too is recognizing that machines are gonna lead other machines. And a lot of people don't realize that's a relationship, because they just think it's two clunky machines. But if you have a chatbot agent interacting with an

(28:40):
inventory agent, that's a relationship that should be led. There should be leadership traits between the senior and junior agent, and you need to build those leadership traits so that they're driving human performance and efficiency. And it can't just be done through technical coding. It's leadership. So teaching machines to lead machines, and being able to

(29:01):
lead machines, is a skill. Then the second one is leading people that build machines. And I think here's what's fascinating: Meta appointed a head of AI, this individual, very talented but young, and you see that across all of the different tech sectors, the very talented, very young leaders. And we take a step back.

(29:22):
I think we all realize the lack of lived experience presents a possible gap in their decision making. You can't fix that. You haven't lived, you're younger, and so there should be a real emphasis on the people who build these machines, especially since our research says their personality is very different than the traditional personality.

(29:46):
There should be a lot of coaching, a lot of leadership, helping them mature so that they're not just pushing forward on innovation, but they're recognizing the responsibility they have to the human race and the humility they should have to bring people that aren't technical into the conversation. And so those are leadership skills. It's so important, and most organizations do not prioritize

(30:09):
technical engineers for their leadership programs. They don't see them as leaders in the same way that a chief legal officer, a CFO, or a successor for marketing is prioritized. Eventually they need to start recognizing that the people building these machines drive revenue, growth, profit and reputation. They drive the business. And are they good at what they

(30:32):
do? Even if they don't lead teams, if they're leading machines, you have to make sure they're good leaders. And then the third one is this AI adoption, and it's figuring out how the CIO, the CSO and the data officer can work well with the CHRO. And that, to me, that relationship has not been figured out. And in some organizations you'll see the CHRO have an outsized

(30:56):
role, but in most organizations, the CHRO's kind of on the sideline of the AI discussion. And being able to adopt AI in an organization requires, like you said, operating model design. How do you integrate machines and humans, augment them together, culture transformation, digital fluency, HR stuff, right? Leadership stuff.

(31:17):
So bringing the CHRO into the conversation through real ownership. At least in my view, the CHRO should own AI adoption. That requires that person to build the skills so that they can talk about a language model, or a threat detection system with cybersecurity, or, for the data officer, a master

(31:37):
data management program. You know, they need those skills to have that conversation. But then the CIO needs to be willing to listen to leadership development, trust, culture, change management, and so forth. The CEO doesn't have time to do that, right? Yeah. He or she is driving enterprise value.

(31:58):
They have time. So it really falls on the C-suite to learn, to work together and to understand who does what in the AI roadmap. And right now I think it's just all on the tech side running everything. And that's why we're seeing great tools, but not great deployment or adoption, in my view.

David Rice (32:18):
Yeah, I mean, I would agree. And you mentioned there why this collaboration is so crucial. I guess my next question is for the CHROs out there listening to this: what are the skills that you think they gotta develop now that are maybe lacking more broadly in that community?

David Swanagon (32:35):
First thing is, I do think there's table stakes to the conversation. If you want to play the game, you have to know how to play the game. There's a difference between blackjack and poker. You have to learn a little bit more complicated game, right? And so the CHROs, I think, have to recognize that there is some learning to understand how a language model actually functions.

(32:56):
How the math works, what are those processes that support the data infrastructure? There needs to be a little upskilling around just IT operations and all those things, DevOps, MLOps and so forth. They don't need to be a coder, but they need to be able to play the game. Right. As long as they understand the language and they're gonna

(33:17):
have those conversations, that's step one. But then the second thing is I think they really need to dive into advancing the HR tools that are outdated. And so in my view, and again, this is no disrespect to Korn Ferry, Aon Hewitt, all those assessment providers, but their leadership models are not updated for the AI workflow.

(33:40):
And that needs to be changed, and rapidly. And I think the way to do that is by recognizing an AI leader is fundamentally different than a CFO or a legal leader and so forth. And you need different assessments. You need different training, mentoring and so forth. And then making sure that, from an AI adoption standpoint,

(34:03):
even the engagement surveys. I don't know if you've noticed this, but a lot of companies, they do their annual engagement survey. How much of it covers AI readiness? How much of it really goes into the human-machine interface? And are companies tracking machine-level trust, and are they tracking adoption as part of engagement?

(34:24):
Most probably aren't, and I think it's because the HR vendors haven't updated their product yet. So I think the CHROs should really work with the vendors to update their products, assessments, surveys, and so forth, but then advance their skills so that they can actually play the game correctly. Because it is, it's a technical function, so you do have to understand

(34:46):
different neural networks, how different language models work. It's just required, unfortunately. But if you learn enough of the game, it's fun, man. That's what's so great: you tell 'em, you know, if you learn this, it's gonna be a lot of work. But then when you start playing, it gets really fun, because you understand why things are the way they are without having to code.

(35:07):
And what's interesting is the machines are gonna code themselves. Everyone's thinking, like, you need Python skills. Well, maybe for the next few years, but eventually you won't need Python skills, 'cause the machines are gonna do that. But what you do need is to understand where we're going, like where the technical roadmap is going and how to manage that roadmap.

David Rice (35:27):
It's interesting 'cause we've seen some chatter about how organizations change, how org design is not gonna be the same, sort of like silos and even like a pyramid structure, right? Maybe more like a pentagon, where there are more people in like director or even C-level roles, because it's just there's gonna be a lot of different things to

(35:47):
manage that maybe don't exist today. I'm curious, do you think there's room for sort of like a hybrid role between the two that sort of bridges the gap and some of those skills within the C-suite? That sort of expansion? Someone that sits between people, data, strategy, and kind of helps these different expertise come together?

David Swanagon (36:08):
I think absolutely. 'Cause you're projecting forward some of the innovations. I mean, they're all just projections, but in my view, data will potentially become more important than currency, more important than finance. And that seems crazy. Like, you know, if cash is king, right, cash is king is what you hear.

(36:28):
In my view, if cash is king, then data is gonna be emperor, because really data is how you play the game. And as these language models continue to consolidate the data repository, data infrastructure, if you can't hook into data sets, if you're not able to hook into that ecosystem, the finances don't matter.

(36:50):
So I really think that this data as a commodity is going to be so critical, and you'll see all these roles in the future just around optimizing data partnerships, data vendor management, data IP, and the cryptography associated with data protection is just gonna become so important.

(37:10):
'Cause it'll be like cash, you know, your ability to access data, move data, and so forth. And having someone who can bridge that, and could speak with the machines and speak with the leaders and actually manage timely decision making. And that's where I think the future role is. Right now everything is about generative AI

(37:30):
and prompt engineering. Like, you see all this stuff. I think the roles of the future are all AI adoption: people who are experts at AI adoption, at how to embed key infrastructure, how to utilize key tools, but then how to keep pace with innovation and create the governance processes for decisions and be that bridge. That's gonna be gold,

(37:53):
because that's where every company, I think, is gonna fail. They'll fail at different levels, but as these machines get more sophisticated, it's gonna be hard for C-suites to make timely decisions, because everything will be changing so quickly. They can't keep up with the skills, they can't keep up with what's shifting, and they're gonna eventually reach that point. That autonomy's gonna keep going up, and even

(38:17):
boards will just defer to the machine at some point because it's too complicated. That's the risk. And that's where I think the bridge comes in, this balancing of the autonomy, trust, competencies: you need roles that help manage that roadmap and that progression so that the C-suite and board are constantly making those

(38:37):
critical decisions about where they are, where they're going. Because eventually, if you let the data collection get out of control, then you're just chasing after data to play the game. And at the same time, if you don't manage the skill set, then you're making decisions. And technology scales good decisions, and it

(38:57):
scales bad decisions. And it's kind of funny, like I'm sure you've seen this in real life. When you make a good decision in real life, you get some benefit, but when you make a bad decision in life, you really pay for it. It's like exponentially worse than a good decision. I don't know why that is, but it's the same thing in technology.

(39:18):
When you make a good decision with technology, it scales and it helps organizations. But when you make a bad tech decision, it's exponentially worse for a company. It's sometimes not recoverable if it's a bad enough decision. So that's where these executives and boards really need someone helping them know: okay, this is something

(39:40):
you need to decide on. This is something you should ignore. These are the partnerships that matter. This is the data that you protect. This is the data that you share. And then this is the lane you're in, where you always decide, and you don't let the machine interfere with this lane. I think that'll be tough, because there's a lot of egos. Senior people don't want to be told, this is your lane.

(40:02):
Yeah. But unlike a human, you can't fire a language model. And this language model is not gonna be intimidated by us, you know, by, oh, this is a CEO. It's not gonna care. So that's where this executive humility is gonna be required and sometimes forced on people, because these machines, when they start doing processes, the power distance that usually helps C-levels manage

(40:28):
large companies is not gonna work with machines, 'cause they just don't feel, they don't have emotions the way humans do. They're not afraid the way humans are.

David Rice (40:37):
One of the questions I wanted to ask you is around leadership and how we're typically framing AI in terms of productivity right now. Right? You kinda argue that the real opportunity here is transformation. I would agree. When we talk about this, a lot of the time, it sort of comes off really abstract to people, right? It just sounds like we're just talking in buzzwords, and

(41:00):
I'm curious, you know, what does it actually look like? Like, what are the defining characteristics of being in an AI transformation right now?

David Swanagon (41:08):
It is a great question. I mean, in my view, companies should anchor to four things. There's four things that drive enterprise value, and it's revenue, growth, profit and reputation. And first off, you want to eliminate any initiatives that do not directly drive one of those four things. And you'd be amazed at how a lot of companies are doing

(41:29):
stuff that does not directly drive revenue, growth, profit or reputation. And then once you've done that, the mistake a lot of companies make is they focus on profit. And the problem with profit is, if the top line is zero, it really doesn't matter what you do with the denominator. And executives know this too; when you tell them that, they're like, oh yeah,

(41:50):
but for whatever reason, they always think of AI and personal productivity, cost efficiency, cost control, automation. And automation is still a profit function. It helps with profit, because it's not driving a product creation or an innovation. It's just improving a process.

(42:12):
And so, what executives I think can really think about is that what drives enterprise value the most is revenue and growth. Ultimately, a lot of these companies that are pre-IPO, it's growth. It's not even revenue, right? It's like, are you entering new markets successfully, even if you're losing money? So from an AI perspective, it's making sure the projects

(42:33):
are anchored to revenue and growth, and not focused on cost efficiency as the primary tool. And I think that's the mistake that's happening: when everyone pushes towards personal productivity, by nature, it's a performance management tool. It's like, all right, how fast can you read your emails? How much can you automate, how much work can you

(42:54):
offload to the agent? How can the language model help you build a report? And when you think about it, this isn't innovation; it's just you're doing your job faster and better and maybe higher quality. But to me that drives profit. But as I was saying, if the numerator is zero, it doesn't matter how much you save in the denominator.

(43:14):
So you have to drive the top line to begin with. So AI should be about: okay, we have these customers, how do we make their experience better? How do we improve the customer experience? How do we give them products that don't exist or services that don't exist, so that they do more with us, we get more of their wallet than we currently do?

(43:36):
That's where machines should spend 99% of their time: helping us think and create those new frontier innovations, because that's where the deep learning research can really unlock breakthroughs in the human experience. If executives are spending their money focusing on revenue and growth and linking AI to that, and rejecting this, okay, you

(44:00):
need personal productivity, you know, rejecting that as the main focus, companies can truly transform in their markets. They can truly change the way they compete. But if you take the view of, okay, we're gonna use ChatGPT and AI agents to eliminate certain roles, and we're going to become more automated as a company and we're gonna

(44:24):
optimize our workforce plans so we have more contractors, it's all profit. And eventually that runway is going to run out, and you're not gonna get anything out of it, because you're gonna have completely maximized your efficiency with machines. You still will make lousy products and lousy services, 'cause you never did anything innovative.

(44:45):
And so that's where I'm thinking, and I've noticed with a lot of the companies we talk to, it's getting them to see what drives enterprise value, and all the NPV models and the terminal value. Anything that drives a valuation, revenue and growth is what matters, way more. I mean, I know they calculate it off of EBITDA, but ultimately

(45:07):
revenue and growth at the top line is what investors and valuation modelers are gonna look at the most. And that's where I think you use these AI tools.

David Rice (45:17):
It's funny because, you know, you mentioned there, like, you know, how fast can I generate this report? How well can I answer these emails? You know, so much of it is just process innovation. Like, it's not innovation to me, it's just process. Like, oh, you're gonna change how you do this, change how you do that, and that's fine. But I do think, yeah, we're in the middle of a mindset

(45:39):
shift, and I don't know how bought in everybody is at this point, but it just feels like, yeah, that's where it's gotta get to, right, so.

David Swanagon (45:48):
Yeah, and it's, 'cause you think about it, the top-selling books are what? Lord of the Rings and Harry Potter, and both built on completely fake, false, imaginary worlds. Worlds that were created from nothing. And that's what generated the most sales. And so that's where I'm trying to think about it from

(46:08):
a customer perspective: okay, if you're in the hotel business, how can you completely change that experience and move it into a completely different paradigm of what it means to be a hotel guest? And how can machines help you with that? That's real money if you're able to pull that off. Whereas if you help housekeepers clean rooms faster, yes,

(46:30):
you'll improve, you know, your RevPAR, right, associated with your hotel. But what does that actually do long term, right? Because you're gonna reach that point where you've optimized the cost of cleaning a room, but you haven't changed the experience of being a hotel guest. And that's where the machine-human interaction could be so

(46:51):
powerful. Because imagine, if you were using the hotel as an example, if you check into a hotel and you have a physical experience, or traditional physical experience, but you've created this digital experience that's associated with the hotel, that's unique, that they can plug into with their devices and even with virtual reality, other things that

(47:13):
you can use machines to create a metaverse for that hotel. That is totally different than the physical one, and you have to be at the hotel to experience it. There's a revenue there. There's something there, right? And the digital verse, you can plug into whether you're at the hotel or not, once you're a guest, once you're a customer.

(47:34):
So there's all kinds of stuff, right? And same thing in sports: a stadium might have a hundred thousand seats, but why can't I go to Ann Arbor and watch Michigan-Ohio State on the field through my AI tool, through my virtual reality tool? Why can't I be on the field watching that from my couch? And you should be able to, right?

(47:54):
I should be able to watch the game from the quarterback's camera. How cool would that be? But that's sort of the experience. How many people would pay for season tickets if it had that kind of extra? And that's the enterprise value perspective, more so than, okay, how do we automate this report? And I think that most CHROs, sadly, are starting with,

(48:17):
okay, we wanna, like, automate things and we wanna save money. It is good, but it's not fun. It's not interesting. It's just good. You know? Where it's much more fun when you sit with the marketer and you go, okay, we're gonna change how we sell this product and we're gonna do this and we're gonna do that, and machines can help with this. And then you're transforming stuff, and it's interesting,

(48:37):
it's fun, it's dynamic.

David Rice (48:39):
Well, before we go, I just have one final question for you. If you could give one piece of advice to leaders trying to balance human and machine systems, where should they start?

David Swanagon (48:50):
In my view, it's first of all recognizing that a human is a human and a machine is a machine, and that we're not trying to create a new species of human plus machine. Right? And I think the human augmentation phrasing is kind of moving in that direction, where there's this

(49:11):
new type of human being. And I think for me it's more, no, humans are humans. Machines are machines. How do they work well together? How do you create an ecosystem that works well together, but you're not redefining what it means to be David, you're just making sure you work well together. Because if we go down that path

(49:31):
where you now need a machine to be human, you now need to be augmented, that's a totally different conversation that maybe not anyone is ready for. So I think it's making sure folks understand: human is human, machine is machine. It's about working well together, but it's not about becoming something new. Then the second thing is to raise the standards.

(49:52):
I do think that's one mistake people are making, and I think they do it on purpose 'cause they're trying not to scare everybody, is that they're saying, hey, no, AI is for everyone. Just take some classes. It's simple. It's, no, it's not. It's actually very complicated and it's pervasive, and we do need to raise our standards and people do need to learn new skills.

(50:14):
Good leadership is being truthful with people that, nope, you need to learn this stuff and it is a bit complicated, but you can do it. We're here to help. AI is for everyone, it's true, but it's only for everyone who's willing to study and work. And that's, I think it's been disingenuous for executives to make it seem like language models are not complicated,

(50:35):
and that they're not, I mean, it's very sophisticated calculus, and people need to understand how they work. That starts with our kiddos too. We need to change the curriculum and accept the fact that children in elementary school should start learning about machine learning.

David Rice (50:52):
Well, as somebody with a kid in the fourth grade, I couldn't agree more, so.

David Swanagon (50:56):
Yeah. But yeah, that would be: raise your standards. But remember, human is human, machine is machine.

David Rice (51:01):
Well, excellent. David, I really appreciate you coming on. This was a great conversation. I enjoyed it.

David Swanagon (51:05):
Yeah, likewise my friend.
It was fun.

David Rice (51:08):
Alright, well, listeners, until next time: if you haven't done so already, head on over to peoplemanagingpeople.com. Get signed up for the newsletter, create a free account. You'll be able to download all our templates and get access to all the content that you can consume. And until next time, human is human, machine is machine.