Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Colin (00:02):
Welcome to the Inside Out Culture Podcast, where we look at the insides of working culture and provide ideas, insights and actions for you to take on the outside.
I'm Colin Ellis.
Cath (00:12):
And I'm Cath Bishop, and
in each episode we'll examine a
different question or a different organization, and we'll use case studies, research and our own insights and experiences to help you change the way things get done in your
world.
Colin (00:27):
We hope you enjoy today's episode.
Please like, subscribe and, of course, let us know what you
think.
Cath (00:33):
Hello and welcome to
another episode of the Inside
Out Culture podcast, and this is one of our guest episodes that we have been really excited about. Colin and I have chatted on and off, wanting to look at this topic around the role of AI in culture, and particularly if we could get the guests we really wanted, and we have. So, Colin
(00:54):
who's with us?
Colin (00:55):
Yes, I'm delighted that we are joined today by Bruce Daisley, and Bruce is a best-selling author of three books, The Joy of Work, Eat Sleep Work Repeat and Fortitude, and a highly regarded keynote speaker. And having led some of the most fast-moving, high-profile companies in the world, Bruce understands there's a big difference between what might work in theory and what works in
(01:20):
reality, and he's now one of the most respected thought leaders on the subject of workplace culture and the future of work, and his discussions are always filled with practicality, warmth and humor. And you know what, Bruce? We love a bestselling author on the Inside Out Culture podcast, but it's the last three things that got you the invite: practicality, warmth and humor.
Bruce (01:42):
So no pressure. Thank you so much for having me. Long-time listener, first-time caller. In fact, you and I had lunch, didn't we, Colin? Because I was a super fan of your podcast. So, yeah, I'm delighted that I got the opportunity to come and talk.
Colin (01:59):
So it's also worth mentioning that Bruce himself is also the host of the highly successful Eat Sleep Work Repeat podcast, and if you haven't listened to it, you should absolutely go out and listen to it. Somebody asked me recently, oh, what podcasts do you recommend? I recommended three culture podcasts. They were like, is that all you listen to? I was like, well, it's not all I've listened to, but it's mainly what I listen to, because there are so many great ideas
(02:21):
out there.
So in a way, this is a bit like one of those cartoon mashups where The Simpsons meet Family Guy. I don't know what that makes me. Probably Peter Griffin, I would expect. We're going to talk about AI. Let me start with a statistic and then let's really have the conversation and unpack where we are with AI. What do people think about it?
(02:43):
When are we going to get any kind of gains from it, if at all? So Upwork did a survey recently of about two and a half thousand people in the US, UK, Australia and Canada, and 96% of the executives said that they expect AI tools to increase overall productivity, with 81% acknowledging they've increased demands on workers in the past year, and yet 77% of employees
(03:06):
said that AI has actually decreased productivity. Are people sick of it, Bruce? Are people sick of hearing about AI and artificial intelligence, this, that and the other?
Is that where we are right now, would you say?
Bruce (03:17):
I definitely recognize some of that, because a lot of people, I think, have either experimented with this and maybe saw evidence of the hallucinations and thought, you know what? This is slightly overhyped. I do see a big contingent of people who fall into the category of saying no good's going to come from this, and
(03:37):
quite often a lot of creative people, really sort of talented, creative people, have fallen into that camp. Do you know those old Douglas Adams rules for technology? Douglas Adams, the author of The Hitchhiker's Guide to the Galaxy. Broadly, and I'm going to ruin it, he said something like: anything invented before you're 18 is the natural
(03:59):
order of things and has always existed. Anything invented between the age of 18 and 35 is new and exciting and presents big opportunities. Anything invented after the age of 35 is the end of civilization as we know it and must be destroyed. And there's a lot of truth in that. A lot of people have sort of looked at this.
(04:20):
Here's one bit of evidence: college students, 100% of them are using AI. When surveys have been done, they can't find anyone who isn't using it. But when we look at the workplace, there was a survey by the British Chambers of Commerce that said only 25% of employees, of workers, say they routinely use AI as part of
(04:40):
their job. So the Douglas Adams rules of technology are giving us a fair pointer, I think, for how we're embracing this. And look, I think that's one of the challenges going forward. Can any of us look at this with a fresh pair of eyes, or have we made our mind up on it?
Cath (04:55):
So mindset is important, isn't it? When we're trying to do something new, when we're trying to understand, actually, whether it's technology or whether it's simply how do I improve performance in any sense. And as somebody with an Olympic background, I kind of am always thinking we don't really lean into our full capacity that we
(05:16):
have, that our mindset determines what happens. So what sort of mindset, what sort of ways of thinking, what sort of perspective should we bring to help us unlock what AI could do in a positive way?
Bruce (05:31):
I think a lot of it is about practical optimization. One of the best things I saw, well, no doubt there's a lot of noise around this, and a lot of us are saying, you know, AI is going to transform our jobs. It's not helped by the fact that, I'm not sure if you saw, that organization Klarna, they came out and said, oh, we've replaced a quarter of our workforce with AI agents. And, interestingly, they came out
(05:54):
about six months later, only just a few weeks ago, and said, oh, we are hiring new people again. And the interpretation of the Financial Times was they felt that Klarna was trying to pump up the tires of their valuation by saying we've seen the future and it's robots, and so things like that aren't necessarily helpful. Then Satya Nadella gave an interview, and he said one thing that he wants to do; he
(06:16):
says, I think our first goal with AI needs to be to try and create 10% growth. I love that because it's so achievable, it's so practical. It's like if someone said to your organization, could you grow
(06:38):
10% faster next year by using new technology? It feels sort of touchable, doesn't it? It feels like the sort of marginal gain, the incremental gain, that I think you would expect and accept that maybe some innovation is delivered. And I think the best part about that is that it produces a mindset which says to us, okay, this is not necessarily
(07:01):
revolution, but this is optimization, and this is a tool that's going to enhance your workers. And so I think that gives us a practical way for any of us to think about this: what's the 10% improvement that AI could give? Now I chatted to someone from Specsavers this week, and they were sort of publicly on the record talking about it, and they
(07:21):
said, you know, they've implemented with a small team, they've implemented Microsoft Copilot, and they're just using a sort of very practical application to try and get these things done. And I think the spirit of the conversation we had was adjacent to the way that most of us use technology in our private lives. Probably you never saw an ad
(07:44):
for Uber, or you never saw an ad for Instagram, or you never saw an advert for LinkedIn. But either someone said, oh, give this a go, or you saw them summoning a taxi by just tapping the screen of their phone and you thought, what witchery is this? And I think that is one of the critical things, the word of mouth, and
(08:06):
building word of mouth inside organisations is probably quite an important baby step for us to think about here.
Cath (08:12):
That's interesting. That's also very democratic, isn't it? So just a quick comment: I like that because actually building word of mouth, that's involving the community, isn't it? That's about all of us, not just the people at the top. So I really like that kind of idea of that rippling through. Yeah, but there's an interesting attitudinal element, isn't there?
Bruce (08:30):
Peter Kyle, the business secretary, got caught a couple of weeks ago. Brilliant freedom of information request that someone put in; someone said, I want to see his ChatGPT requests. Amazing, what a genius journalist. Anyway, some of his requests were, what policy should
(08:54):
I be doing here? What should I do on this? Actually, this is a textbook example of how these tools can get us out of a rut, because no one's saying that he would just press go and implement that as policy. But what they are saying is that he's maybe going to learn: yeah, I didn't get what I was looking for the first time. Can I make my prompt a bit more expansive? Can I actually make it a bit more simplistic? Can I say, tell me what they're doing in Japan, tell me what
(09:16):
they're doing in different countries of the EU? It was a really good example. The interesting thing is it was met with criticism. It was met with the normal group of people saying, oh, that's not how you do things, and actually I think we all need to overcome that. We all need to think, okay, well, how can we embrace a bit
(09:38):
of sort of technologically diverse thinking? I think, you know, it was a textbook example of how to experiment with these things to find where the good stuff lies.
Colin (09:48):
I think part of the problem, and we've all seen this with organizations who do digital transformation projects, is they never get the gains that they expect, and then kind of 10 years later they actually start to see some of those gains. And I read some interesting research called the productivity J curve (if you draw a J), which is that there is an initial dip in
(10:10):
productivity because it takes time to learn the new systems. We have to redesign things like workflows. The culture inevitably has to grow and evolve, and I love the experimentation piece that you talked about with Specsavers. And then, just as that kind of adjustment period kind of turns, that's when we start to see some of those gains.
(10:31):
I think it was Goldman Sachs who said that they don't expect to see any kind of return on AI until 2027 from a business perspective. But that's only if people do it in the right way. And similarly with digital transformation work, really the culture needs to change before you implement the tool. My sense is that many organizations are rushing
(10:52):
to say we're going to adopt it, but there are no guardrails, there's no thought given, in the way that Specsavers did it, to: well, how do we do this? What do we want it to be used for? Do you get that sense as well?
Bruce (11:02):
Yeah, no doubt,
absolutely.
I mean, look, you know, the critical thing is, I suspect all of us, when the first interest in ChatGPT came around, we all had a go and maybe we witnessed some of the hallucinations. These products have definitely got better. But even three months ago I experimented with Google's
(11:22):
Gemini, and the reason why I mention this now is because about a couple of weeks after I did this it got significantly better. But I said, look, all three of us here are interested in business and performance and culture. So I said to it, can you give me, in the area of team dynamics, can you give me some papers that I should be reading, the latest
(11:44):
papers about team cohesion, right, the sort of geeky stuff I love reading. Anyway, it recommended me seven, and I had said, I want you to tell me these have got to be real; I need you to check these are real. And even with that caveat, it came back with seven papers and four of them didn't exist, and I was spitting feathers.
(12:05):
I was absolutely furious about this because I was going to read them. Anyway, a couple of weeks later they launched their deep research product, and I have to tell you, it's night and day, because now you go to that and it costs you about 20 quid a month; you can normally experiment with it for free initially. And now you get proper papers. And for me, I was telling you, Colin, that now I read a paper
(12:29):
like that and, because I'm self-taught, I'm not at a college or university, I go to it and I say, I'm reading this paper, can you give me your perspective? In addition, can you tell me three or four papers that have cited this paper? So you basically get to see the future: what did that paper influence? And it's been revolutionary for me.
(12:53):
So look, what's that story got? It's got the health warnings that these products have got, but also the speed that they will do this work. We would be foolish to ignore the benefit that these things are providing. So, you know, it definitely comes with a health warning, but I think they can be transformational. There's a podcast I love listening to; they went route one on this.
(13:13):
They called it The Artificial Intelligence Show. Anyway, I listen to these two chaps every week, and what I like about it is they tell you the latest news on this. But in addition, they give you applications. So one guy, he said he was about to do a keynote presentation. He uploaded the presentation that he'd given to ChatGPT and
(13:34):
he said, coach me on how I did. And it came back with prompts about how he staggered his speech. It gave him pointers about how he was talking. Now, you know, this is dazzling, right, and I wouldn't have thought to do that. And so, you know, look, here's the opportunity for all of us here: this is revolutionary technology, and
(14:00):
the danger of trying to appear like we've got a really clear answer on this might end up being a limiting factor in our own development here. There's a really interesting thing that those two chaps who run that podcast say. They say the vast majority of people who are thinking about AI right now, and training and learning how to do it, are doing it in their spare time.
(14:20):
They feel that their organizations are failing them. And I chatted to a chap who builds an app for a very, very well-known car brand, actually, and he said, oh, all of my coding now is done by AI. I said, all right, and what's the company philosophy? What system do you use?
(14:41):
He said, oh, our company philosophy is we don't use it. So he has his own account. Now companies are being left behind, and here's the challenge: unless you're going to think about how you can get these things on board, with all the health warnings, you've got to train people on what the pitfalls are. We saw those lawyers who ended up using ChatGPT and they
(15:04):
ended up citing case law that didn't exist. All those health warnings, we can train for. But there are big opportunities that we're being presented with here, and I think we need to overcome some of our maybe sort of cautionary notes, cautionary elements, and we need to overcome that, because I think there's a big opportunity for us to really make step changes of improvement.
Cath (15:26):
So to do that, we might want to think about what's the culture change that will enable us to ask those questions. Yeah, what is it in our culture that's stopping us, that makes us feel afraid? And for me, there's a huge part about having a learning culture, and it's interesting you quoted Satya Nadella earlier, who's very famous for changing Microsoft from a know-it-all to
(15:47):
a learn-it-all culture. There's that sense that curiosity is just part of what we do and experimentation is part of what we do, so there are some ways in which potentially we've not been creating a culture that sets us up for this. I think collaboration is another aspect, because inevitably, I think it's quite hard to experiment on your own. You sort of experiment a bit, but then you sort of want to
(16:08):
check your notes, because actually, just like when you're listening to the show, you go, oh, that's a better question I could have asked, and I can get there quicker if I also get other people on this. So what is it we need in the human culture that's going to help us really grab these opportunities from the AI?
Bruce (16:27):
Look, I'm really interested in your perspective. I would say there are a couple of important things. Right now, most of us, well, the old truism is beware the busy manager, and most of us are overloaded. We don't have spare time in our day to get things done. And so, you know, we've got to do a degree of systems thinking. We've got to say, if people are going to be experimenting with
(16:50):
this, what are they stopping to do? What are they stopping doing? Because the idea that somehow... friends of mine who have very demanding, busy jobs, I've said to them, have you played around with this at all? And they've said to me, I'm going to be honest with you, I'm too busy; I've used it at home. And by the very nature of experiments, failure is one of the options.
(17:24):
Here's an example I prepared for a presentation last week, and I really loved this, largely because it really appealed to my main obsession. I loved this example. I played about five pop music songs, big hit songs from the last year or so: Sabrina Carpenter's Espresso, Tate McRae's Greedy, APT by Bruno
(17:48):
Mars and Rosé, Adore You by Harry Styles, like these big hits, these big bankers. And I said, what have all these songs got in common? All these songs were written by Amy Allen. You've possibly never heard of her, because she's like a sort of LA songwriter. Now, Amy Allen says she's written seven songs a week,
(18:08):
every week, for the last seven years. Amy Allen has had seven hits. Now, there's a really interesting lesson in that, because this woman just won songwriter of the year at the Grammys. She's the best of the best, and most of what she does fails. And if we've got this idea that the experiments that we do are all going to be winners, then effectively we're creating this
(18:30):
need for perfection that I think is unrealistic. So my view would be: number one, create the space for innovation, and number two, create permission for it to not work. So we're going to try a few things and, if they don't work, we're going to come back and share what happened. And I think that, for me, would be an important element of culture.
(18:51):
And to the point of culture, you raised this really interesting thing that I saw, Cath, which is that Sam Altman, the chief exec of OpenAI, was asked: well, now a lot of this coding is being done by the computers, what would you advise talented teenagers to go and study? And he
(19:12):
said, okay, maybe being a software developer, a programmer, is important now; I would strongly advise soft skills. Now, I think, for anyone interested in workplace culture, this is a really critical lesson. This means making sure that your team have got the capacity to be empathetic with each other, to be good with customers, to
(19:34):
listen first, to do all of the good stuff that we've always talked about. I suspect it's never going to be more differentiating than it will be in this new era we're about to enter.
Cath (19:45):
I think that's so interesting, and I think we see that time after time, don't we, in the World Economic Forum sort of future-of-jobs surveys, all of these surveys. What are the skills that we need? Nobody's moving away from the human skills of leadership, resilience, empathy, in that sense. So for me, I keep hearing people talking about, we need to
(20:07):
collaborate better, and we haven't yet. We don't know how to collaborate. We all work on our own; even if we're in a team, we don't share with another team. And what I see is that we talk about collaboration, we put it in the values, but we don't live it, and actually we don't grow up collaborating. We grow up proving ourselves in our exams. We're a very individualist culture.
(20:27):
There's a lot of show you've got the answer, and it's almost, who can use AI better? Can I put in a better question than you? And that's madness, for us to be competing, isn't it, when we can move so much faster if we say, I tried this, I tried this, oh yeah, what if we now go here, and what's someone else done? But we're not set up for that. So for me, I'm really interested in helping leaders unpick why
(20:50):
the collaborative behaviour isn't there and actually, you know, the incentives that are preventing that. And, of course, incentives often get in the way of failure, you know, in this sense that, well, it's all very well saying, yes, we accept failure, but I've got to hit these numbers and I've got to do this, and that's not written in any of these numbers. So, you know, again, I often then come back to this sense of what
(21:14):
does success look like? Are we clear about all of the elements required in that picture, and are we rewarding them? Are we really developing them? So I think, yeah, it's so interesting that every time I have a conversation about AI, we get round to soft skills, human leadership, those things. We don't not discuss them; they actually end up being back in the centre, which is something we don't expect. But then why wouldn't we expect it,
(21:35):
actually? Because we are using it to help us make progress.
Bruce (21:39):
Yeah, there was an additional thing that I think really plays into the workplace culture discussion. One of the things, and it's a really interesting challenge of all of this, is that if we're to do our job of thinking about this now, we've got to sort of look down the road and work out where we're going. But one of the things that I saw discussed online a couple of weeks ago was, people said, okay, well, if you vaguely suppose
(22:03):
that we are going to get to some level of general artificial intelligence, where computers are sort of as good as humans at some of the thinking stuff, then the thing that differentiates is not going to be intellect anymore, it's going to be agency. They use this juxtaposition. They said the thing that will differentiate success then will
(22:26):
not be who's got the bigger brains, but what you're doing with it. How are you bringing it to bear? When we can give agency to people, it seems to be transformational in terms of their own motivation and them
(22:50):
feeling like they've got a stake in something. Well, let's have a look at what we've done to work in the last 10 years. We've taken all of the agency away. We've layered people with bureaucracy, even to this extent, have a look at this. Here's a test that anyone listening to this can apply to themselves. Have a look at your diary today. How much agency were you given today about how you use that
(23:14):
most important asset, your time? Because I suspect most people have got half an hour free today. Now, if that's going to be the differentiator, if that's going to be the magic that determines what success and failure is, then how do we rewire our organizations for that? That's really exciting for culture people. That's as good as it gets, right?
(23:34):
How do we give agency to people? But it's also something daunting, isn't it? Because I did some work with a pharmaceutical company that was just a really good illustration of this. They said to me, come and run a burnout workshop. This is not a good story, by the way; I did not come out of this story well. They said, come and run a burnout workshop.
(23:59):
It's like a kickoff, how many hours: does anyone here spend more than 15 hours a week in meetings? Every hand in the room goes up. Right, I've hit on something here. Does anyone here spend more than 20 hours a week in meetings? A woman at the back piped up, put her hand up and said, I'm going to stop you there. We all spend 40 hours a week in meetings. I was like, right, great, we've taken the first step.
(24:19):
We've identified the size of the problem. Now let's work out how we can fix this. How? And she said, I'm going to stop you there. We're not going to do any less meetings, any fewer meetings. It's like, okay. Let me tell you, what followed was the worst 90 minutes in business history, because I have no answers for you here. If the patient isn't willing to change, I cannot solve this
(24:41):
burnout problem. But it's a good illustration, isn't it? A lot of us are in this zone where we're absolutely frazzled by the amount of meetings we've got. We know that our organizations would be a bit better if we just had slightly fewer meetings. We just can't do it.
Cath (24:58):
Yeah, I so love the autonomy piece, because I think we should have realized this without AI as well, because it is an intrinsic motivator, right; it's one of the things that helps us to be at our best, to do our best work, whether it's with tech or without tech. So we've been really stupid anyway, and I think that's part of some of the productivity issues we've had.
(25:18):
You know, it's a key element in there. So I think it's so interesting. It comes back again, and that's why we're not set up culturally how we need to be. I saw some research recently saying autonomy is much more powerful than purpose, and organizations have been sort of overplaying the purpose card. I mean, it's important, I'm not saying it's not important. It is an element. Again, it's an intrinsic motivator, but actually we're
(25:41):
missing this big one, because actually we've stifled the autonomy so much, as you're saying there, that we're almost, oh, how do we get that back? But yeah, I think that is absolutely, Colin, that's our world, isn't it?
Colin (25:53):
It's totally our world, and it comes back to some of those themes that we always talk about, particularly this agency piece, Bruce. What's needed from an AI perspective is curiosity, creativity, innovation, resilience, all of these things, none of which anybody's got any time to do. I worked with an organization where one of their values was curiosity. I did a similar kind of thing.
(26:13):
Get out your phones, have a look at your calendar. Last week, how much time did you spend being curious? One guy laughed out loud as soon as I asked the question, which immediately answered it. You know, it's like, ah. I'm like, yeah, so it's zero. And you're right, Cath. Organizations are so concerned with telling people that they have autonomy. Then they tell them what the purpose is. They tell them what the vision is, usually badly written.
(26:35):
They tell them what the values are, and then they stuff their calendars full of unproductive activity, which means that they've got no time at the end of the week to do the things that they really need to do. They feel wrung out. They take that home, it's all collapsing around us, and then we wonder why they're really skeptical and cynical about AI. Crazy. So I think, just to round this up, we are naturally positive on
(26:58):
the Inside Out Culture podcast. So, and Bruce, you called it right at the start, there are gains to be had from AI. But I think organizations need to think differently and, much like the Specsavers example, they need to think: how are we going to do this? How do we give people autonomy over it? How do we treat it as an experiment so that actually we create some blueprints for people to follow, such that when
(27:20):
we actually do adopt AI more broadly, if that's what we're going to do, then we actually are more productive; we're not just saying we're more productive. I think that's what we're saying, right?
Bruce (27:29):
Yeah, no, absolutely. That's exactly it. And look, you know, some of it is baby steps, experimenting. Maybe, you know, maybe there are some people in the organization who are more open to this. Let them do it. Let these people be the heat seekers. I did some work with a professional services company, the sort of organization that, because they talk about
(27:51):
proprietary data, they built their own custom AI, and they said what had happened was everyone had tried it once, it hadn't been very good, and no one had used it again. Now, if you can find the few people in the organization who are willing to push on through that and learn how to use it better, then let them do it. Let them be the guiding light rather than trying to drag
(28:15):
everyone else into this future. Let some people be the torchbearers, because eventually people will follow them.
Colin (28:22):
Fantastic. All right, so we always end the podcast with three actions people can take, which we include in our knowledge sheet that we send out. If you want to get hold of the knowledge sheet, you can subscribe at any time. So the first thing to do is understand the role that AI will play in the workplace; I think just a little bit of reading on what can we expect from AI in the first place.
(28:43):
The second thing is to be curious about how you can leverage the technology to your advantage. You know, don't just kind of ignore it. Think about how. You know, be curious about how can I actually use it to enhance the work that I do. And then, third, really uncover, from an organizational perspective, what gains can we make through a focus on soft skills. How can we give people agency over their conditions?
(29:05):
How can we give people agency over the decisions they make with regards to the use of AI? Bruce, it has been absolutely brilliant. I always think you can tell how good a conversation is by just how quickly the time has gone. It has been fantastic to have you on the podcast. Thank you so much for sharing your insights.
Cath (29:24):
Yeah, huge thanks. I mean, a very empowering conversation about AI, which is important because it can feel overwhelming, and actually I feel so excited about the opportunities and how we can really start investing in the human culture we have around it to make sure we get the best out of everything.
Bruce (29:40):
The honor's all mine.
Thank you for having me.
Cath (29:43):
Thanks for listening to
today's Inside Out Culture
Podcast.
Colin (29:47):
Please remember to like,
subscribe and, of course, share
with others who you think may be interested.