Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Jorge (00:00):
The idea that these
technologies can be used to make
us wiser, more compassionate people is really exciting,
particularly if the people we're talking about are leaders whose
decisions impact so many other people, right?
Narrator (00:23):
You're listening to
Traction Heroes.
Digging In to Get Results with Harry Max and Jorge Arango.
Jorge (00:34):
Harry, great to see you
again.
Harry (00:35):
Hey, good to see you,
Jorge.
It's always a pleasure.
Jorge (00:39):
It's a pleasure to see
you as well.
How have you been?
Harry (00:42):
I've been great.
I've been traveling a little bit, really enjoying getting out
of my regular routines and patterns and a little bit of
the, "I got to work and I don't know how I got here 'cause I was
daydreaming." And it's hard to do that when you're traveling.
It's fresh and new experiences.
(01:03):
Puts me on alert that I'm in a new and alive environment.
Jorge (01:09):
Oh, fantastic.
I would love to talk more about travel, and I have some travel
coming up too, but I've actually brought a reading that I would
love to share with you and maybe have a little bit of a
discussion about.
Harry (01:24):
I can't wait.
Bring it on, man.
Jorge (01:29):
Alright.
"In the history of work, we humans have moved through the
agricultural, industrial, and information ages and have now
entered the age of augmentation. In all the past ages, the tools
we used were passive. Like a shovel to dig a hole or an email
system to share information,
(01:52):
these tools were dormant until we chose to use them.
But with AI, we have moved into a new era where our tools are
actively interacting with us in ways that change how we perceive
and engage with the world.
"Instead of waiting to be used, generative AI tools are
listening, analyzing, learning, and predicting what we want or
(02:13):
what we need.
AI can now read and write for us, identify our strengths and
weaknesses, and shape or curate our media diet.
It influences what we learn, think, do, and say.
This advancement has moved us from the information age to the
age of augmentation.
"What is augmentation?
(02:35):
Augmentation is the process of improving something by adding to
it.
Whether you realize it or not, you're already augmented in many
ways.
For example, how many phone numbers of friends can you
remember?
Probably not many.
Why?
You now use your smartphone to augment your memory.
"The key principle of augmentation in the context of
(02:57):
AI and leadership is to adopt a both-and mindset.
You must leverage both the power of AI and your most human
qualities." And now I'm going to skip a bit.
"To enable good synergistic collaboration between AI and
humans, we must understand how the mind operates.
(03:20):
From a cognitive science perspective, most models of the
mind describe in different words three key qualities: perception,
discernment, and response.
Everything we think and do is filtered through the cycle of
neurological qualities.
Therefore, to understand how we as humans get the best of the
augmentation with AI, we must understand these neurological
(03:44):
qualities.
In the context of cultivating good leadership, we call these
three qualities awareness, wisdom, and compassion." I'm
going to stop there.
Harry (03:58):
Wow.
I have not, I don't believe I've read that, and I'm super curious
to hear what it's from.
Jorge (04:05):
So this is from a fairly
new book.
It's called More Human, and the subtitle is How the Power of
AI Can Transform the Way You Lead.
And it's by Rasmus Hougaard, I dunno how to pronounce this,
Rasmus Hougaard, and Jacqueline Carter, with Marissa Afton and
(04:34):
Rob Stembridge.
This is fairly new, it was released in March of this year,
I believe.
Harry (04:40):
Okay.
I love the orientation to unpack augmentation and what it means
to be in relationship with the technology in a more active way.
I almost wondered if it was an author by the name of Nicholas
Carr who wrote a book called The Glass Cage.
(05:03):
I don't know if you've read that.
If you're a designer or a designer type, The Glass Cage is
about designing systems and interacting with systems.
The principal metaphor in The Glass Cage is all the electronic
interfaces inside a cockpit of an airplane, for example, and
(05:28):
our relationship to those, and what the book that you've just
introduced describes is not just this more passive relationship,
but a more active relationship where the conversation is more
intentional.
And we have to be more alert to what our relationship with this
(05:50):
technology is because it's no longer just about our point of
view.
It's an augmented point of view.
Jorge (05:58):
There's a couple of key
ideas there.
One idea is this notion that, for most of human history, the
tools that we've created have helped us augment our capabilities in
various ways, right?
And they talked about the smartphone as a way to augment
your memory, right?
(06:18):
Like you don't need to remember your friends' phone numbers
anymore.
You and I are both wearing glasses right now, and I have
astigmatism, and if not for the eyeglasses, I'd have a hard time
reading what's on my screen.
So this augments my visual abilities.
But I think that the first key point that they made there is
(06:39):
that we now have this new technology that is on a
different order, because all of these other technologies that
came before are, in some way, passive in that we reach out to
them to do stuff.
You use your contact list in your smartphone when you're
(07:01):
wanting to get your friend's phone number.
Whereas AI is more proactive, and this idea that it's
monitoring our activities and surfacing things for us without
us necessarily asking for it.
And I think it's a little bit aspirational.
I think that they're describing something like an agentic
(07:21):
system, which as of this recording are few and far between, but the
possible capabilities are there, right?
That's one key distinction that they introduced there.
Another one, and for this, I am gonna have to go back to the
book, another one is that it's one thing to augment our memory
(07:44):
by writing down phone numbers either on your smartphone's
contacts app or in a little black book, right?
Like, those are all augmentations of your memory.
It's one thing to augment your memory, and it's another thing
to augment your mind writ large, of which memory is a part,
right?
And, this might have come across
(08:05):
in the fragment that I read: this book is specifically
written for people in positions of leadership who are looking to
use AI to augment their minds.
And they say here that there are three particular qualities of
the mind that we need to be on the lookout for.
(08:27):
They talked about perception, discernment, and response.
And they mapped those to three qualities of good leadership.
And the qualities were awareness, wisdom, and
compassion, which I think are thematically related to other
(08:48):
things we've talked about in this podcast.
Harry (08:51):
I'm now drawing the
connection between my comment
about The Glass Cage and Nicholas Carr and the reading that you
had, because of course, when I've walked away from The Glass
Cage, the concept, because that's very much about
autopilots, right?
It's about being able, as a pilot, which is a
(09:12):
leadership position, to turn over control to a set of systems that
are now very much informed by very sophisticated... I don't know
how much machine learning has gone into an autopilot, but
certainly the type of pattern recognition, sense-and-response
(09:34):
mechanisms, all of that is built into being able to keep a plane
up in the air and flying where it's supposed to be.
And the concept that I walked away with was, as a designer and
as a user of these systems, you have to be very attuned to
learned complacency.
And that this notion of learned complacency is...
(09:58):
Nicholas Carr talks about how the alert systems have to be
designed in such a way that you just don't end up ignoring them
after a while because they're trying to communicate with you.
And it seems with AI, and I'm certainly involved in a number
of AI projects right now, I haven't done any autonomous
agentic work, but before that, working on developing agents and
(10:20):
systems to produce agents inside of enterprise environments.
And also on the more creative side, these are systems that,
they...
it's not just about you set it to do something and it goes and
does it and then tries to warn you if it's not working
properly, it's doing stuff on your behalf and it may have
(10:43):
permissions that you've given it and rights to go interact in the
world or in parts of the world electronically on your behalf.
And it really requires a very different kind of awareness and
a much more active type of discernment.
It can't just be a subconscious discernment where, when things
(11:03):
show up, you try to discern them in a "wise" way and then respond to
them rather than react to them, in a more resourceful way.
This is a situation where we have to be much more attuned to
having bits and pieces of ourselves out in the world
acting on our behalf and generating triggering events and
(11:29):
activities out in the world that we then have to be alert to and
then have to discern and then have to respond to.
And so, it's much more...
that notion of augmentation, I think about it in a musical
sense, but now it's almost like extended augmentation.
It's like, it's not just, I have new superpowers, it's like those
(11:52):
superpowers are being released into the world and now I have to
pay attention to what's happening in the world beyond my
immediate reach now because it's part of me and part of how I'm
interacting in the world.
Jorge (12:05):
One of the theses of this
book, and I don't know if it's
explicit or if it's just one of the things that I got from it,
is the notion that, for somebody who is in a position of
leadership, and the reason I bring leadership to the table
here is that if you want to have traction at scale, that's really
(12:26):
hard to do on your own.
Like, eventually, you're going to be working with other people,
and if we have this augmentation technology that can improve our
abilities to perceive, improve our ability to discern things,
and explore different ways of responding...
(12:48):
If we agree that those are very broad characteristics of the
mind that a leader needs to work on, then we can use them both to
become better leaders, but also to open up space by getting rid
of the drudgery, like the aspects of those things that are
(13:11):
rote or that can be delegated more effectively to an AI, to
open up space for our humanity, to bring forth our humanity.
And I think that the authors are pretty clear-eyed on what AIs
can and cannot do.
For example, there's a whole section on empathy.
(13:32):
Empathy is not something that AIs can do; empathy is something
that human beings can do.
Compassion is something that human beings can demonstrate.
And, to me, the idea that these technologies can be used to make
us wiser, more compassionate people is really exciting,
(13:55):
particularly if the people we're talking about are leaders whose
decisions impact so many other people, right?
It just feels like a positive message.
Harry (14:07):
There's maybe an
interesting angle on this that I
hadn't really considered before.
One of the questions that always comes up is, what is the only
thing a leader needs?
And my answer to that is, followers who are willing and
able to follow.
But not just willing and able to follow, but actually do follow.
(14:29):
And in this regard, if you look at AI, especially agentic
solutions:
Willing?
Yes.
Able?
Sometimes.
Do?
Yes.
Part of what's happening now is it's not just for leaders.
It's that anybody who is leading or wants to lead can now mobilize
(14:56):
followers.
Those followers hopefully are...
there's humans and humanity in them.
But there are also, increasingly, agents acting on behalf of somebody who
wants to effect change and get something done.
And to be able to mobilize doers, maybe at the discrete task
(15:20):
level or at the orchestrated task level, to get stuff done.
I think part of what I'm taking away from this is, maybe we need
to rethink a little bit what it means to be a leader.
Because if you, as a person in the world who wants to effect
change, if I'm able to mobilize not just people, but now doers
(15:49):
on my behalf, then it becomes a question of how creative can I
be with respect to those agents and what do I mobilize them to
do and how do I release them in the world on my behalf?
So maybe in some ways, this historical distinction between
(16:10):
followers, management, leaders, so on and so forth, is starting
to blur as we become more capable of wielding...
I don't know how to talk about it like this because they're, in
effect...
I have more agency with agents.
(16:31):
You know what I'm saying?
Jorge (16:35):
Yeah, for sure.
Well, I say "for sure" as though the distinctions are clear.
They're fuzzy, to your point.
They're fuzzy.
I do think often about the distinction between leadership
and management.
I do think that those are distinct things, and I've seen
many articles suggesting that management skills are going to
(17:00):
be very important in a world where AI agents are a thing.
Many of the basic things that someone needs to do when they're
a manager of humans map over to the skills needed to effectively
work with agentic AI systems.
(17:21):
But I don't think that's the same thing as leadership.
And I don't know that we need to get into defining what
leadership is.
I wouldn't be able to do it on the fly.
But I think that there's an aspect to this that is earned,
like leadership is earned.
And your quote about, what does a leader need?
Followers.
(17:41):
That's it.
People follow you because they believe that you can lead them
effectively.
And we all lead in different situations.
I was just thinking like my wife and I take turns leading the
family in different situations, right?
(18:01):
Sometimes our kids take turns leading the family; based on
whatever situation is happening, we can choose to follow them,
follow what they're asking us to do.
But I think that the kind of leadership that we're talking
about here, and certainly the kind of leadership that this
(18:22):
book is written for, are people who are leading organizations
and teams in organizations, who are trying to get them to maybe
a different place, who are trying to get them to perform at
a different level.
And again, I don't think that kind of leadership is granted.
(18:43):
It's something that is earned.
And to earn it, you have to work on it.
And I love this idea that the core "work on it" consists of
working on your mind.
I think that is right, that the basic skills that you need to
(19:05):
work on, or not basic, the most foundational skills that you
need to work on are the skills that have to do with your mind.
Harry (19:15):
I'm really excited to
read that book and I wouldn't be
at all surprised if there's something in that book somewhere
that I bring back to one of our conversations because it's
really rich.
And it's prompting me to think about what does it mean to want
to lead as an agent of change in the world where, if you buy that
(19:39):
the only thing a leader needs is followers who are
willing and able and actually do follow, then it becomes a
question of where does that start and what is the path to
leadership?
Because you can start small and you can go big, and to the
extent that you're able to do that in a somewhat intentional
(20:00):
way and grow that followership, as it's being earned, as you are
empathizing through the world and demonstrating compassion and
responding rather than reacting in a powerful and human way to
create the kind of world that we wanna live in.
I think thinking these thoughts and asking ourselves these
(20:22):
questions is really important, and it's easy to inadvertently
put our head in the sand while all of this AI work is happening
because it is changing so quickly.
Jorge (20:33):
You know what's funny is
I came to this book because I
was in a conversation where someone mentioned it as a
good book for leadership and AI, and I came to it primarily for
the AI bit.
But I guess it was a surprise to me, and a pleasant one at
that, that they were like, you really need to look at these
(20:55):
foundational things.
And I'm saying this because this is not these authors' first
book.
They have a previous book called The Mind of the Leader.
I haven't read that, but I suspect that book is more about
this framework, where you need to really unpack your mind
if you're going to do this right.
Harry (21:13):
Oh, I'm gonna go read
that book first.
Jorge (21:15):
I was gonna say, it's
like maybe you tackle that one
and you bring that one to our conversation.
Harry (21:20):
Yeah.
Jorge (21:21):
That might be up your
alley.
And in the meantime, I still have to hit the books that you
recommended in our last conversation, the Pirsig books.
Maybe I'll bring one of those to one of our future
conversations, Harry.
Harry (21:38):
It's funny, it feels like
it's all getting very focused on
a set of themes here.
I think you've brought this up a number of times, but at some
point we ought to step back and look at what are those themes
and what are the key points that underpin those themes and look
at the foundational ideas here.
They're coming out on the calls. I just wonder what it would look
(21:58):
like if they were all distilled.
Jorge (22:01):
That's a great prompt and
it's something that we might do
when we have more of these conversations in the can, we
might actually put them through an AI and say, "Hey, what themes
you find here?" And I say that kind of half jokingly
because I think that we are having these conversations at
the right time, in that I agree with the authors of this book
(22:25):
that AI is a different kind of technology in that it does
prompt these questions about, "How does the mind work?
And how does my mind work?
And can I do that better?" Like, we are in a very introspective
moment with regards to cognition.
So I think that having these conversations about traction and
(22:49):
discovering that so many of them are pointing to inner work is
not a coincidence.
Harry (22:55):
Yeah, it's pretty
wonderful.
One of the folks I'm working with, a CTO at one of the
companies that I'm a fractional executive and executive coach
on, talks about the LLMs as being just a super fancy autocomplete,
and that's not my experience of it.
So while it might be true, it is not at all my personal
(23:18):
experience of it, because what it may be autocompleting are
thoughts I haven't had yet, and that doesn't seem like
autocomplete to me.
Jorge (23:29):
We might get into that in
greater depth in future
conversations.
But, for now, once again, I've had a great time talking with
you.
Harry (23:39):
I really appreciate you
making the time.
What a great read.
That was really cool.
Thank you.
Narrator (23:46):
Thank you for
listening to Traction Heroes
with Harry Max and Jorge Arango.
Check out the show notes at tractionheroes.com and if you
enjoyed the show, please leave us a rating in Apple's Podcasts
app.
Thanks.