Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:09):
And welcome back to TrulySignificant.com presents Success
Made to Last Legends. I'm Rick Tocquigny, along with our
LA correspondent Jennifer Tocquigny, who leads Many Moons Entertainment. We
have a great guest on today, Kate O'Neill, and she
(00:30):
is the renowned tech humanist. What attracted us to Kate
were many things, but she's got this futuristic mind that
sets her apart. And she asks questions about design in
ecosystems and how people are affected by the ongoing innovation
(00:52):
and of course the surge of AI. She was an
early Internet pioneer, building the first intranet for Toshiba America
and became Netflix's first content manager. Today she advises some
of the great global companies on using emerging technology for
(01:12):
sustained growth. And I absolutely love the way you approach alignment, Kate.
I am a student of alignment for growth, because one of my mentors, Doctor Fuleet Boissu, worked for Steve Jobs, and all they talk about is alignment.
(01:32):
So I mean, you were just a wonderful guest. Welcome today.
Speaker 2 (01:38):
Thank you very much for having me on here. Boy,
it's such a great setup, and I love that opening too,
of the alignment focus, because that's a huge part of
what I bring to my thinking around all of this.
Speaker 1 (01:53):
Would you... yes, go ahead, explain.
Speaker 2 (01:58):
Yes, okay. Well, it will be by way of giving you a bit of background, because I'm a linguist by education. I was a German major, a Russian and linguistics double minor, with a concentration in international studies in my undergrad. So, a terrific background for technology. But it was
(02:19):
through just a miracle of timing. I was in college
when the web came about, and I was supervising the
language laboratory, as one does, and the web came about,
and I found out I could build a website for
the department for the language lab. So I built it,
and it turned out to be one of the first
websites at the university, the first departmental website at the university.
(02:42):
And it was at a time when people could make manually curated lists of what websites were new that day, which is so mind-boggling to even think of. And
that got me noticed by a guy at Toshiba. We
started a correspondence. That's how I got recruited to Toshiba.
So this is all backstory, but all of this to
say that linguistic background informs one thing. I really think
(03:06):
about language as the architecture of so much of communication.
It is the original human technology, and it also gives
us a good hint about what meaning is. So meaning
is about this relationship between what it is you're trying
to communicate or what it is you have in your mind.
(03:27):
Like when we think about linguistic meaning in communication, you're trying to communicate something, the other person on the other side hears something, and then there's the message that's actually in between. And anywhere there's shared overlap between these three components, you've actually gotten meaning, but only where there's shared overlap. So
that alignment piece is huge when you think about language
(03:48):
as a form of communication, when you think about language
as a metaphor or meaning as a metaphor for anything
that's significant or anything that matters in the world at all.
So that has to do with marketing, that has to
do with strategy, it has to do with how we
think about organizing the world in general.
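Kate's three-component model of meaning, what the speaker intends, what the message carries, and what the listener hears, can be sketched as a toy set intersection. The word sets below are invented purely for illustration:

```python
# Toy model of the three components of meaning Kate describes:
# meaning exists only where intent, message, and interpretation overlap.
intent = {"deadline", "friday", "urgent"}          # what the speaker means to convey
message = {"deadline", "friday", "please"}         # what the words actually carry
interpretation = {"deadline", "friday", "casual"}  # what the listener takes away

# Shared overlap of all three components = the meaning that actually lands.
meaning = intent & message & interpretation
print(sorted(meaning))  # ['deadline', 'friday']
```

Everything outside the three-way overlap is intended but unsaid, said but unheard, or heard but unintended, which is the alignment gap she goes on to apply to marketing and strategy.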
Speaker 3 (04:03):
So there you go.
Speaker 2 (04:04):
There's my sort of opening ramble about some of these
concepts and how they fit together.
Speaker 3 (04:10):
For me.
Speaker 1 (04:10):
So, okay, so take it away.
Speaker 3 (04:12):
Where should we go from.
Speaker 1 (04:13):
The futurist expert guru! Tell us, where are you from originally?
And a little bit more about your education?
Speaker 2 (04:24):
Yeah, sure, So I'm from the Chicago area originally, and
I was always interested in languages and technology. In fact,
there's a story it sounds apocryphal, but it's actually true
that in first grade I won two statewide competitions. One
was a Young Author competition and one was a Young Programming competition. So it was sort of a pretty good
(04:48):
indicator early on that I was going to always be
interested in technology and the written word, but more broadly
like how we communicate with one another, how we make connections with one another. And so that story sort of resonates
throughout my career in a lot of different ways. I
was, like I mentioned, in college
(05:10):
when the web came about, went on to Toshiba, lived
in the Silicon Valley for a decade during the nineties,
which of course was just an incredibly formative time for Silicon Valley becoming the place we know it now to be,
and that's when I went through a series of startups.
After Toshiba landed at Netflix as one of the first
(05:31):
hundred employees, as you mentioned, I think. And from there, another series of great jumps into different enterprise organizations: Hospital Corporation of America once I moved to Nashville, which is the largest for-profit healthcare company. There's just been
a whole lot of really interesting dots in this pixelated
(05:51):
picture of my career, and the fun of it is
connecting them and figuring out where these patterns are that
would otherwise seem a little bit random. But there's a
lot of really interesting synergies and synthesis today.
Speaker 1 (06:06):
I love it. Okay. And as aforementioned, Jennifer, representing the entertainment industry, LA-based, lives and dies by California in her industry. I'm going to let her jump in and ask you a few questions from her sector, and then I'll guide this thing back over to the general world.
(06:29):
So fire away, Jennifer.
Speaker 4 (06:32):
As the tech humanist, how do you believe the leading
streamers are shaping the modern human experience? Do you think
it's positive or negative?
Speaker 3 (06:41):
I think it's a mix, right.
Speaker 2 (06:43):
I think a lot of people these days are much
more aware of how algorithmic experience plays into our lives.
Right. At the time when I was there, I helped develop one of the first algorithmic website designs; one of the projects I was involved in was redesigning
(07:03):
the homepage so that it used more algorithmic kinds of clues and cues to give you more personalized information, like:
Speaker 3 (07:11):
We've got this movie and you'll love it.
Speaker 2 (07:13):
And that was great at the time, and I think
what we've seen in the twenty five years since then
is we've gone all the way that way. And now
I hear more and more that people, especially younger people but across generations, are kind of pushing back just a little bit and going, like, I
don't know if I always want algorithmic recommendations. I don't
(07:34):
know if I always want to be in the filter bubble.
You know, maybe I do want to just randomly encounter something. But that's an interesting byproduct
of it. And I think also the phenomenon of binging
and streaming has been, you know, maybe a mixed bag for society.
I don't know that we all benefit from the watch
(07:56):
next button automatically deploying on our behalf and keeping us
engaged in hours' worth of programming. But I think there's also been some benefit to the disruption of how many decision makers are involved in getting some diverse voices out there, getting some new content,
(08:19):
shaking up the sort of old players in the market,
and making it possible for you know, some new formats
to be discovered and some new approaches to be seen.
Speaker 3 (08:30):
So, a mix. What would you say, Jen?
Speaker 4 (08:34):
It seems like there's just infinite choices and you end
up going in and scrolling for the amount of time
that it would have taken to watch a TV show essentially.
But I think for me personally, I like that it
identifies specific things for me to watch in different genres
and things like that. But I know it can be overwhelming,
(08:55):
but I do think that it can be helpful. So
I think technology is helpful and it's catering to different people,
but it can definitely be overwhelming.
Speaker 2 (09:04):
So one of my favorite stories from landing at Netflix
as one of the first hundred employees as the first
content manager was that I took over the movie database. So
this is the database of all of the movie listings
and all of their metadata and everything.
Speaker 3 (09:18):
What I saw right away is.
Speaker 2 (09:20):
That every movie could only have one genre, and I
think if you know movies at all, you know immediately
that that's a big problem.
Speaker 3 (09:28):
That's going to be a limiting factor.
Speaker 2 (09:30):
So I got approval to, you know, re-engineer this, and I worked with pretty much everybody in the company. Although, as I've said, it was one hundred people, so it's not really that many. But it was engineering, it was product,
it was marketing, it was everybody, and you know, we
recoded the database relationships so that movies could have more
than one genre. And so that was a really big undertaking.
It was huge, ugly work. But what it meant is
(09:54):
that in the twenty five years since we did that,
Netflix has used machine learning really effectively to discover these really interesting patterns between movies and between your viewing habits, my viewing habits, everyone's viewing habits, and to be
able to kind of connect these dots in ways that
would not have been possible prior to machine learning, but
(10:16):
also would not have been possible without cleaning up that
genre relationship. Because now you can get these surfaced quasi-categories that show up in your home feed; mine says things like Oscar-winning twentieth-century period pieces or, you know, award-winning movies about friendship, or
you know, things like that that are not genres in
(10:38):
any classical sense, but that use that same architecture. So
I think it's a really interesting combination story of how
it is that, you know, you can do digital transformation to make a company more ready for the future, and how machine learning slash AI can create richer
(11:00):
human experiences so that we can all enjoy and benefit
from them.
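The re-engineering Kate describes, moving from one genre per movie to many, is essentially the switch from a single genre column to a many-to-many join table. A minimal sketch in SQLite; the table layout and the movie here are invented for illustration, not Netflix's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# The limiting "before" schema had one genre attribute per movie.
# The "after" schema uses a join table, so a movie can carry many genres.
cur.executescript("""
CREATE TABLE movies (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE genres (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE movie_genres (
    movie_id INTEGER REFERENCES movies(id),
    genre_id INTEGER REFERENCES genres(id),
    PRIMARY KEY (movie_id, genre_id)
);
""")

cur.execute("INSERT INTO movies VALUES (1, 'Amadeus')")
cur.executemany("INSERT INTO genres (name) VALUES (?)",
                [("Drama",), ("Period Piece",), ("Music",)])
cur.executemany("INSERT INTO movie_genres VALUES (1, ?)",
                [(1,), (2,), (3,)])

# With genre as a relationship rather than an attribute, one movie
# can now be found through any combination of its genres.
rows = cur.execute("""
    SELECT g.name FROM genres g
    JOIN movie_genres mg ON mg.genre_id = g.id
    WHERE mg.movie_id = 1
    ORDER BY g.name
""").fetchall()
print([name for (name,) in rows])  # ['Drama', 'Music', 'Period Piece']
```

Once genre is a relationship, combinations of genres become queryable, which is the raw material for the quasi-category labels Kate mentions.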
Speaker 1 (11:04):
Right, it's fascinating. You've got my mind thinking. You frame innovation around regenerative value. Procter & Gamble people are all about creating value.
(11:24):
And if everything were said and done to describe our careers there, it was to improve lives and change lives. And so that's what value means to us. So I would love for you to share an example of a system, either past or present,
(11:47):
that you've successfully grown, restored, or replenished in terms of the human resource.
Speaker 2 (11:54):
Yeah, that's a really beautiful question. It's one
that has my mind going in a lot of directions.
Speaker 3 (12:01):
I think these.
Speaker 2 (12:02):
Days when we think about regeneration. I did a whole
talk actually about Regeneration for Thinkers fifty in the lead
up to so they have their gala in November, in
which I became a ranked thinker, So that's really exciting,
thank you. And in the couple months prior to that,
(12:23):
they did a virtual event that was the lead into
the gala, and I did the closing remarks there and
I think it's still available online, a ten-minute talk on
regeneration as a strategic theme, and the thing is that
there's so many ways to parse it. You can think
about regeneration in this sort of climate ecology sense, of course,
(12:43):
and I think it's really important that we do in
terms of sustainability, but you can also think about it
in terms of technology and in terms of human experience.
And one of the things that I tried to do
in those remarks was tie some of those thoughts together
in the sense that regeneration ultimately, well sustainability really is
(13:05):
about doing more with less on some level, right, And
regeneration is the promise that once you have tried to do more with less, by reinvesting the gains from what you've saved into a sustainable system, then you theoretically are building something that should be self-perpetuatingly sustainable, right,
(13:29):
like you're growing something that's increasingly sustainable. And I think, when we look at technology, we can see there have been, you know, these kinds of rules and laws, like Moore's Law and the series of laws that have come along after it, that have shown us how we can think about chipsets and energy in sustainable ways,
(13:51):
Like we're going to get more out of each chip
because it's going to be able to do more as
we go. We're going to be able to put more circuits on each chip, you know, two and a half times every two years, or whatever the Moore's Law ratio is.
And then all of these other laws that have shown
us we can do more with energy, we can do
more with processing power. And yet what we're also finding
(14:14):
is there's this corollary that suggests we can't keep up: the more sustainable we get, or the more energy efficient a system becomes, the greater the demand for energy becomes. And so we have a human problem, really:
we have a demand problem. We have a greed problem.
Speaker 3 (14:35):
We have a.
Speaker 2 (14:37):
System design problem. And I think that's where, you know, when you think about the P&G folks, and folks who are listening who are thinking strategically, who are thinking organizationally, who are thinking holistically, there's this incredible opportunity, I think, for leadership in the organizational sense to set a new model, to set a new process
(15:00):
for how to integrate these ways of thinking about regeneration,
about climate responsibility, about human centricity, you know, creating meaningful
human experiences, and about using technology in a responsible stewardship
sort of way, so that we are moving into a
(15:22):
really exciting future. The technology that's here and that we
see on the horizon is exciting, crazy exciting. And also
plenty of people are wicked scared of what they're seeing
coming out of this technology and out of the decisions
that surround the technology. Let's be clear, because it's not
the technology that's scary. It's the decisions that people are
(15:45):
making with the technology. It's the decision to cut jobs
and use AI instead. It's the decision to make deep
fakes and try to manipulate people into avoiding democracy. It's
the decision to do, you know, manipulative or destructive things
with technology.
Speaker 3 (16:02):
That's the problem.
Speaker 2 (16:03):
The technology itself is not the problem, just as the
technology is not itself.
Speaker 3 (16:08):
Good or bad.
Speaker 2 (16:10):
We made the technology. Technology is neutral, it is... I'm sorry. What I mean to say is: technology does not itself carry values. But it is not inherently a neutral thing either, because it is imbued with the actions and the values that we imbue it with.
Speaker 1 (16:30):
Jennifer, and perhaps her industry, feels the greater threat of AI versus me, coming from CPG, consumer packaged goods. I'd love for you, Jennifer, to add any color commentary to that, because you are in the trenches with your entertainment company and you know what's happening
(16:52):
out there, because a lot
Speaker 4 (16:55):
Of Hollywood is now tech companies. How do you keep
grounded with these new tech companies coming in and taking over?
Is there a way to kind of stay grounded with
all the change?
Speaker 3 (17:15):
Yeah?
Speaker 2 (17:15):
I think, if I interpret where you're going with that, or what your thinking behind it is: there's a level at which the entertainment industry is predicated on human creativity, right, and there's an element of that that is legitimately subsumed into AI. Right,
(17:40):
Like we understand that AI, generative AI and other AI
models have been trained on human creative output, writers like me. My authorship has been subsumed into data sets that train AI, and so now generative AI tools can write like me. This is probably not something that I should advertise, that you can just ask AI to write like Kate O'Neill, the tech humanist, and you will get a pretty basic, passable sort of representation. I think
the more important thing, and so I've actually participated in
United Nations AI Advisory Body discussions about IP protection and
(18:22):
how we're going to validate certain kinds of you know,
creative assets as being authentic or valid or you know,
signed, official, or whatever the case may be. That solution does not exist yet; there are many possible solutions
(18:44):
in the meantime.
Speaker 3 (18:45):
I think that.
Speaker 2 (18:45):
The music industry, which I was adjacent to.
I was in Nashville for ten years. I moved there
as an aspiring songwriter alongside my tech work, so I
certainly had some connectivity to the creative industries there and
my background with the movies and as a writer. I
(19:08):
have a lot of sympathy for the fact that what
we're doing to the entertainment industry is incredibly challenging, to have it pivot and figure things out. Like, you know, we were just disrupted by streaming in all of these industries, and we've just tried to figure out a reallocation of value there. Now we
(19:30):
got to figure out a reallocation of value when it's
not humans that are actually making the creative product. Like,
what does this even look like? I think my hopeful
take is that we are figuring it out a little bit.
Like with streaming, I think there have been some progressive
models that have rewarded humans, even though that may be
(19:52):
fractional compared to what the ownership model used to be.
Like when you bought an album, the songwriters were better
rewarded than when you stream a song, certainly. Meanwhile, someone's pocketing most of the money that's transacting, and it isn't the people who are making the creative product. I
(20:14):
would like to think that we are ripe for a disruption that favors creators, some kind of model that can look to put some protections in place. Not that that means we don't use AI tools to create,
because I think there's incredible product that can come from
(20:34):
using AI tools to make really inventive types of creative output.
But I think there's got to be a place for
human creative output, and we have to figure out a
way to value it, or else we're going to make
it impossible for people to pursue that field and with
it the entire entertainment scaffolding that goes around it. So
(20:58):
how do you stay grounded, I think, is a hard question, although strategic optimism is my model.
I believe in the idea of like naming the threat,
talking about risk and harm, and really managing that risk
and harm, and at the same time identifying what it
is you hope for and what it is you.
Speaker 3 (21:17):
Plan to work toward. So it's a both and situation.
Speaker 2 (21:23):
In this case with entertainment, I think there are legitimate threats, and also: what could we do? How could we create new models? Could that be an LA initiative, where creators and tech-savvy folks who are in the entrepreneurial space, entertainment-wise, get together and figure out new strategies for capturing some of that value?
(21:48):
Rick, back to your point about the P&G folks and all of those sort of synthesized ideas that come together in this. Thank you for the discussion.
Speaker 1 (21:56):
We're going to take a quick commercial break. I'm going
to throw it to you first, Kate, tell our listeners
where they can purchase all of your books.
Speaker 2 (22:06):
You can find it at whatmattersnextbook.com, but that will just redirect you to my website, which is koinsights.com, and the Books section there specifically.
So yes, I would love to have your listeners pick
up a copy of the book, of course, but also
reach out and let me know you know what it
is that resonates with them.
Speaker 1 (22:26):
And I'd love to add for our alumni networks that
listen to this, how can they contact you to be
a keynote speaker at their next event?
Speaker 2 (22:41):
Well, funnily enough, also at koinsights.com: they can find the Speaking header on that page and reach out there.
Speaker 5 (22:51):
This is Marcus Aurelius, reappearing to proclaim that Truly Significant: Conversations with Big-Hearted People is a rare piece of literature.
This book reminds me of one of my more stirring quotes,
Waste no more time arguing about what a good man should be. Be one. If you're stepping into your next life
(23:15):
chapter of your career and questioning what lies beyond success,
Speaker 6 (23:21):
This book is for you.
Speaker 5 (23:23):
Dive into forty soul stirring stories from luminaries like doctor
Jane Goodall, Ed Asner, and Emily Chang, stories that urge
you to pursue purpose, serve others, and build a legacy.
Speaker 6 (23:39):
That outlasts you.
Speaker 5 (23:41):
Authored by Rick Tocquigny, Truly Significant will challenge your view of success and ignite a life of impact. Order now at tinyurl.com slash Truly Significant and begin living intentionally.
Maybe your epitaph will.
Speaker 6 (24:02):
read: she gave outrageously, extended grace unceasingly, and lived to help others so that death found her empty. Visit trulysignificant.com and celebrate the most truly significant people in your life with the Truly Significant community. How bold
(24:26):
of you to make your next chapter matter and be
truly significant.
Speaker 1 (24:32):
We are back with the one and only Kate O'Neill,
the tech humanist, and she is a futurist. And for
you listeners out there, you know that we love talking to futurists because they think way outside the box. They see things far before we see them. And we're also
(24:52):
joined by Jennifer Tocquigny, representing the entertainment industry, and Jennifer's got one more question before she goes.
Speaker 4 (25:00):
Think as a final question, how I guess, how can
companies ensure that technology serves humanity and not the other
way around?
Speaker 3 (25:10):
Mm. Yes, well asked.
Speaker 2 (25:15):
I think that there are a number of practices here that are really helpful. One of the things that's very interesting to me is that people assume that leaders of companies have absolutely no interest in doing the right thing or creating meaningful human experiences or whatever. A lot of times I will hear from people, like, you
(25:36):
think leaders care about, you know, ethics or responsible tech? And the truth is yes. I meet leaders all the time, all the time after my keynotes, who come up to me one-on-one, ask me questions, and will say: we needed this vocabulary, we needed these frameworks, and it's really hard to have this conversation with my board.
(25:57):
I need there to be more ammo, so to speak, although I'm a non-violent person, so ammo is a terrible metaphor, and more kindling for the fire, to make sure that we can make this transformation. I think that part of how we make, you know, more human-friendly tech decisions, or create business
(26:20):
decisions that favor humanity, is we have to be honest about what it is we want to accomplish. And we do want to accomplish that. Like, nobody doesn't want to live in a world that is set up better for humans. Like, we're all humans.
Speaker 3 (26:34):
We would love to have that.
Speaker 2 (26:36):
Yeah, I think it's just a question of reconciling these kinds of classic tensions: we want more profit, we want more growth, but we also understand we need to do the right thing. But do we really? Do we need to do the right thing? Is that gonna get us our quarterly margins?
(26:56):
And so I think we have just a lot of
opportunity to have much more emotionally intelligent and intellectually honest
conversations about the richness of what we could create in
the world. And that, I think, requires integrating some
(27:17):
of these viewpoints, to be able to say: alignment, right? Like, we need alignment between what the business is trying to do, what people outside of the business want to have happen, and what the technology and other resources can do, what the capabilities are, so we can make sure that all of those are aligned.
Speaker 1 (27:36):
Okay, as Jennifer departs: if Jennifer's company got to do a documentary
Speaker 3 (27:42):
On you, yeah, nice.
Speaker 1 (27:44):
What would be the title of that documentary?
Speaker 2 (27:54):
Oh gosh, that is a fun, provocative question. I don't
even know, but I'm going to play with that one
for a while.
Speaker 3 (28:04):
What should it be, Jen? What do you think it should be? You're the expert. Oh, I don't know, not the expert on me, but you're the expert on titles.
Speaker 4 (28:14):
Maybe like a line that you say frequently. It doesn't necessarily have to evoke a strong feeling. It could just be something that you say, because the thing with titles is it has to be intriguing enough for someone to want to watch it without giving too much information.
Speaker 3 (28:39):
I could go totally silly.
Speaker 2 (28:40):
And when I used to be asked about, like, what it is that I would tweet about all day: AI, robots, cats, and beer.
Speaker 4 (28:47):
That's so... that's it: Robots, Cats and Beer, coming to a theater in twenty twenty-seven.
Speaker 1 (28:57):
We're talking... let's move into the space of true significance now.
When we wrote this first book, we had the opportunity
of interviewing Jane Goodall, and she is the final chapter
and we're so glad that she allowed us to talk
with her. She was the founder of the Jane Goodall
(29:20):
Institute and a United Nations Messenger of Peace. Beyond that, though, she was just a great person who really cared about what she did and about the message of hope she carried to the rest of the world.
And then we talked to a lot of other people,
and what we learned along the way was it wasn't
(29:41):
about the fame as much as it was about how the condition of their heart changed somewhere along their journey. And
when I got information from your publicist about you, I
just read it and reread it, and I said, there's
a story inside Kate's story that we need to uncover
(30:07):
and let the rest of the world know about it.
And so if I had a journalistic mission today, it
would be to get to the heart of you. And
I think we're getting close. When we talk about the aspiring songwriter, there's some music or sound going on in your head, some reel of film that's rolling, and it's manifesting itself
(30:33):
in what you're doing today as a futurist and as an author. But there's something else inside there that made you switch from success to significance. Tell me about it, Kate.
Speaker 2 (30:50):
Oh, that's such a great provocation. I think that success
is significance. I think significance is success. I am all
about significance because significance is meaning, right. What something signifies
is what it means. And meaning, as I've already alluded to
(31:14):
in this conversation, is, to my mind, the most central
human condition. It is what we uniquely care about. Like
no other animal that we know of cares about meaning
the way we do, and machines don't yet care about
meaning the way we do. It is uniquely ours, and
(31:38):
it scaffolds from this baseline of semantic understanding, you know,
communicating something in words and what those words mean when
we say them to one another, all the way up
through layers of purpose and patterns and truth and significance,
all the way out to the most big-picture,
(31:59):
macro, existentialist, cosmic questions of meaning, like: what's it all about? And why are we here? Those are all meaning, but when you try to distill them down to a central idea, it is about significance. But the question is what matters. And that's What Matters Next:
(32:23):
even though it is a book that's written for leaders
and helping leaders make better decisions, and that sounds very
corporate and there is an element of being of service
in the corporate space, it is a very personal construct
to me, because what matters is at the heart of the question that I've been asking my whole life around meaning,
(32:45):
and how can I help people get better in touch with truly what matters, and how they can align themselves, align their work to that, how they can, you know, be more in concordance with what they value and what
they're doing in the world. I think that
(33:09):
misalignment of meaning is one of the most tragic missing pieces in so many lives and in so many businesses, and the businesses that thrive are the ones where you can really see the concordance. You know, one of
the examples I use about why insights are so powerful
(33:30):
in ethical acceleration and tech decision making is because you
can think of an example like Apple and you can
think about how clear it is that Apple is a
design-centric company.
Speaker 1 (33:42):
Right.
Speaker 2 (33:42):
That's an insight about Apple. Apple has known since the
Steve Jobs formation, right that they're very much about design.
And so you can imagine a scenario where someone brings
a leader at Apple a product that they would like
to ship or a feature they would like to ship,
and yet they're saying, well, it's not truly designed well
(34:03):
yet, you know, we don't have it ready. It's an easy decision: that's a no, we're not ready to ship
Speaker 3 (34:09):
That, not for Apple.
Speaker 2 (34:10):
Maybe another company would ship it, but Apple knows that
it's not ready to ship, because that is a truly intrinsic thing about Apple. And so what I always do
when I illustrate that story is to say, what is
it that's just as true for you, like for you
for your organization. What's an insight about who you are
(34:31):
or who you are as an organization and what you
value, with which you can accelerate the decision-making that you're doing in a truly aligned and ethical, value-centric way, because you know this. That is meaning.
Speaker 1 (34:50):
And so, getting to when you go to present to an organization like Walmart: I wasn't there obviously, but I can tell you from past experience of hearing world-class speakers and thinkers like yourself,
(35:12):
the sad thing is, people will come in with a book, we will go through classes, and then at the end of the day, or the fiscal year, or five years after: where is the imprint? Why aren't we thinking like Kate O'Neill? Why
(35:36):
aren't we acting like Stephen Covey taught us? It's like: please, don't think of Kate as a one-off we're bringing in to speak and sell a book. This is a philosophy that needs to have stickiness to it.
Speaker 2 (35:59):
Yeah, you know, there's a new model that I'm launching
with my company and I'm really excited about it. We
have been trying to think about what it means to
grow in a meaningful way for KO Insights, and the
model that we're launching is called ten thousand boardrooms for
one billion lives. So the idea, as I hope it's clear,
(36:22):
is that we would very much like to, and we're measuring this, of course, get into, metaphorically or physically, ten thousand actual boardrooms of companies, organizations, or even cities. You can use it as a metaphor for the leadership of a city; I do a lot of work with cities and help steer the decision-making that's happening there, such that it benefits one billion downstream lives. And it's easy
that it benefits one billion downstream lives, and it's easy
to approximate. You know, we're gonna have to use all
sorts of approximation approximations, but you think about a company
like P and G or like Walmart, it's very easy
to amma the many, many people who are downstream of
the decisions that are made in those boardrooms or near
(37:06):
the boardrooms, right, like at the second and third sort of
order level down from boardrooms, there are still very potent decisions being
made about technology, about strategy, about customer experience, human experience.
And I would very much like to do exactly what
you're saying, to create more of a presence
(37:29):
that's remembered in those organizations, so not so that I
have to be there, but so that there is a
sense that we're part of this movement that you know,
Walmart or Ikea or whoever is part of the ten
thousand boardrooms for one billion lives movement, and that helps
(37:52):
us remember to make those decisions in a way that's
aligned with what we're saying our values are.
Speaker 1 (38:01):
And you know, the people that are about to hear this
have heard it one hundred times from me. Core values
have to be strategic, which lead to authentic purpose and mission.
Stop posting mission statements and core values on the walls
if you're not going to walk the talk.
Speaker 3 (38:27):
One hundred percent.
Speaker 2 (38:28):
Rick. Yeah, and I actually think mission, vision, and all
these other statements are oftentimes a couple of sentences
of quasi-language that no one can remember, no one
can recite. And what I am a much bigger advocate
of is a strategic organizational purpose that is articulable in sort
(38:49):
of three to five words, and that everybody knows. And
so the great classic example for me of this is
like Disney theme parks. You could say create magical experiences,
and everybody across the organization understands that to be their purpose. Right,
So if you are working in a call center and
(39:12):
taking booking calls from guests-to-be, or if you're
working as a janitor on Main Street in a theme park,
you all understand that creating magical experiences is your purpose,
and so you feel empowered, assuming that the company has
done a really good job, which Disney theme parks have
done a very good job of: letting people know that
(39:34):
they are empowered, within reason, to make
that happen: to, you know, create refunds, or set
things up special for people, or go out of their
way to make sure people feel seen and valued and
that they're having the time of their lives. That is
so clear at a place like that. And I'm not
(39:55):
even a Disney fan, but I am a fan of
purpose well articulated, that is well executed upon, and they
certainly do it. So they're a great example of that,
and I think other organizations can easily see how they
could articulate their three to five words and bring a
lot of their brand and culture and experience design and
(40:18):
data and technology into better concordance with that. Well, because
the piece I even forgot to mention about Disney theme
parks is then you can think about how you can
deploy a multi-billion-dollar project like the MyMagic+ MagicBand
wearables and have them be integrated in this Internet of
Things, beacons-and-sensors kind of a program where
(40:42):
that information is something people are just carrying around with
them and magically it seems to unlock doors and magically
it seems to carry payment information and reservation information for
restaurants and get you onto rides.
Speaker 3 (40:57):
The fact that you.
Speaker 2 (40:58):
Can bring technology into concordance with those values and that
purpose is an even greater clarification of why this matters
and why it's so important, because that's part.
Speaker 1 (41:09):
Of what you're saying when you're out speaking to these
companies is to find your two- or three-word mantra,
like an Apple has.
Speaker 2 (41:22):
Three to five words strategic organizational purpose.
Speaker 1 (41:26):
Okay. So again, I let your career and your books
and everything you've written wash over me, and before
the show, I was thinking, what can I ask Kate
to make sure that she understands what we would love
to accomplish? And I think it's this. I think that
(41:49):
you could be the medicine that certain sectors
need to take right now, and that you could
provide a practical, calm voice about what are
you going to do with your future, and stop being
so freaking scared and intimidated by AI. So let's start
(42:16):
with a couple of weird little questions. Is AI supposed
to be an extinction filter?
Speaker 3 (42:27):
An extinction filter? I'm not sure I know what you
mean to start.
Speaker 1 (42:30):
If you don't use AI, will you become extinct?
Speaker 2 (42:35):
Mm, I see what you mean. On a personal, professional level,
I don't know if it's meant to be an extinction filter.
I certainly see how, the way that we currently operate
and the way that AI is likely to roll into
more and more work roles, it certainly will function that way.
(43:02):
I think one of the distinctions I make
when we talk about the future of work is that
we conflate, a lot of the time, the future
of work, the future of jobs, the future of the workplace,
the future of the workflow, the future of productivity, the
future of tasks. All of these are different futures.
They happen to fall into relationship with one another. But
(43:25):
it is really important from a future of jobs perspective
that individuals understand the technology that's emergent all the time.
Speaker 3 (43:35):
And right now.
Speaker 2 (43:36):
I use this term minimum viable skilling. So we talk
a lot about upskilling and reskilling, and the term I've
started using is minimum viable skilling. And the minimum viable
skilling I think for people is prompt skilling. Because we
understand that generative AI, large language models, and even the
agentic AI tools that are starting to pop up more.
Speaker 3 (43:57):
And more are largely.
Speaker 2 (44:00):
Driven by prompts. And the thing about prompts that's so
interesting is, once you write a good prompt,
once you articulate well what it is you're looking for,
it is really not that different from delegating to a person.
So in either case, you have to be clear in
your own head about what it is you want and
(44:22):
what the success criteria are, and, you know, sort of
the boundary conditions, like: don't do that, do this,
I definitely want you to include that, format it
this way, and go. And once you do that, you
can get great work product out of generative AI. You
can also get great work product out of other people
(44:44):
generally speaking. So I think that's one of the
things that people should really lean into: the minimum
viable skilling, which is prompt skilling, and getting good at that.
Speaker 1 (44:55):
Allow me to elevate you to Jane Goodall's level, and
that you are. Maybe she
was a linguistics specialist as well, because she just said
the right things at the right time, and you remind
me so much of her. It's possible that you
(45:20):
could be the calming voice.
Speaker 2 (45:27):
Well, that's really kind. I think it's partly my resting
bliss face.
Speaker 3 (45:30):
I don't know why; I'm just always smiling.
Speaker 1 (45:35):
Let me read this; it's wonderful: it's about the courage
to follow your curiosity, the humility, Rick, to recognize the
interconnectedness of all life, and the determination to leave this
world a better place than you found it. It just
steals my heart. She told me that personally.
Speaker 3 (45:58):
Absolutely, Oh, I want I'm looking.
Speaker 1 (46:02):
I love that for you.
Speaker 3 (46:03):
Yeah, I mean I.
Speaker 1 (46:04):
I see something in you. I hear something in you. I've
read enough about you to be dangerous. What do
you want to say here? If there's a quintessential chapter
about significance, where you can
calm the sea and have people look forward,
(46:27):
maybe we should end the show today on that, and
then I'll get back to you with other questions. How
about that?
Speaker 3 (46:35):
Okay, I will say this.
Speaker 2 (46:38):
It's so funny; that is one of the greatest
compliments that I've ever been paid in my life. And
the other was when someone compared me to Peter Drucker.
And I would like to think that maybe at my
best I'm a synthesis of the two, or I aspire
to be right Peter Drucker meets Jane Goodall. That is,
(47:00):
that is what on my best day I am hoping
to be. But I think that it does come down
to sort of the synthesis of Peter Drucker's absolute clarity
about strategy and Dame Jane Goodall's absolute clarity about hope
and purpose and humanity. Right, there's this middle path,
(47:24):
this third way, that those two
Speaker 3 (47:29):
Ideals set for us.
Speaker 2 (47:32):
And I think that middle path is really defined by
a clear and insistent focus on human meaning, meaningful experiences
for humans, on alignment between what is good for people,
what is good for the planet, and what's good for business,
(47:53):
because ultimately we know that business is the engine that
carries these things forward. So, you know, we can't
be anti-capitalist. I'm not anti-capitalist. I
was asked that on a podcast at one point, like,
are you a socialist? No, I'm not a socialist.
I can see how you'd arrive at that, but no,
I'm not. I am a capitalist, but I am very
much a compassionate capitalist. I am very
(48:15):
much about what could we do to remake capitalism. It's
not doctrine, it doesn't have to be sacred. What could
we do to remake capitalism so that it is actually
achieving more of the goals we would like to set
for ourselves. How could it make the world better? How
could it make humanity better? How could it create the
conditions for flourishing for humanity now and in the future
(48:40):
that would be worth aspiring to? What could we do
to make the world better for more people, more of
the time? I mean, it's such a modest ambition,
it's such a throwaway sort of line. But I
think it's a really, really important question to be asking ourselves,
(49:00):
as often as we can.
Speaker 1 (49:03):
And with that, we will close today's chapter, and thank
you, Kate, for being on. Her book is What Matters
Next, about making human-friendly tech decisions in a
world that's moving too fast. And hopefully she will be
appearing in our next Truly Significant book before she puts
(49:24):
out her next brilliant book. But I think we caught it.
I think we caught lightning in a bottle today. Kate,
thank you so much. And as we always say, folks,
we hope that you can get your
arms around the concept of success while being significant in
(49:48):
this world.
Speaker 6 (49:49):
Break