Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
Welcome to the Actionable Futurist podcast, a show all about the near-term future, with practical and actionable advice from a range of global experts to help you stay ahead of the curve. Every episode answers the question: what's the future of? With voices and opinions that need to be heard.
(00:23):
Your host is international keynote speaker and Actionable Futurist, Andrew Grill.
Speaker 3 (00:29):
In today's episode, I'm joined by two amazing guests from PwC: Darshan Chandarana, partner and emerging technologies leader, and Julia Howes, a director who helps companies use data to make better decisions about their people. We're here to talk about the exciting opportunities in the enterprise AI space.
Welcome both. Now Darshan, interestingly, around the same time, I was working at BAE
(00:51):
Systems in Australia.
You were working for them here in London. What an amazing coincidence. Perhaps you can introduce yourself and tell us how you made your way to PwC and what your area of focus is at the moment.
Speaker 2 (01:02):
So how did I start off my life? I started off as a computer scientist, so that's how I trained, and software engineering was my major. I went off to work for BAE, and we sort of talked about what that was back then. This was many, many years ago, so I was working on head-up displays for various fighter aircraft. I did that for a while,
(01:23):
kind of enjoyed it. It was a good grounding and a good apprenticeship, really. But then I just went into other roles: went to work for banks in the technology space, went to work for technology companies in the technology space, surprisingly, and then went to work for consultancies. I've been at PwC for about six years now and I'm loving every minute of it, really.
Speaker 3 (01:41):
And Julia, what was your path here, and what are the areas you specialise in?
I love Copilot.
(02:28):
I've actually turned it on in my Office 365 tenant, so we'll talk more about that; maybe give you some tips on how to use it even more efficiently. So you're both working across multiple industries. Darshan, is there any one industry that's truly leveraging the power of AI at the moment?
Speaker 2 (02:41):
I think all the industries are very interested. Where I think we're seeing the most traction is where there's a lot of client-facing activity. So financial services have been using AI, with a capital A, for quite some time, and their pivot into Gen AI hasn't been that difficult for them, really. There's a lot of experimentation going on and so on. The other area where I'm seeing quite a lot of activity is
(03:02):
retail and consumer, especially on the retail side. So, long story short, those are the two areas where I think we're seeing the biggest amount of traction. But actually it's Julia's area, and other areas like HR and the contact centre that straddle all industries, that are seeing the biggest disruption. Marketing is another area that's seeing quite a lot of disruption.
Speaker 3 (03:21):
So I've spoken to a number of guests about how we need responsible AI. Darshan, maybe you can start: how would you describe this, and how does it work in practice?
Speaker 2 (03:29):
That is... I'm not even going to say the $64,000 question; that is a monumental question. So we've been in the responsible AI space for quite some time. You can go online, you can check out the papers and all that sort of good stuff. But we truly believe that responsible AI is a fantastic framework to think about as you start going on your journey. And on that responsible AI piece, let's just take the high-level
(03:51):
piece here: just because you can do something using a technology doesn't mean you should, right? So really understanding that, understanding the societal impact, understanding your people's impact, your employees' impact and your customers' impact, is where that framework kind of fits in. And yeah, there's quite a lot of information around it, but it's that just being mindful.
(04:12):
I think that's the best way we describe it: being mindful of what you're trying to do, why you're trying to do it, and what the long-term ramifications of those decisions would be.
Speaker 3 (04:20):
Now it's fair to say that Gen AI and ChatGPT a couple of years ago piqued everyone's interest. You said that a lot of your customers have been doing AI with a capital A for a while. For those listening who know they need to get into it, where do they start?
Speaker 2 (04:31):
Another great question. The best way to describe it sounds a bit consulting-y, but I'll say it anyway: just understand what your strategy is going to be. Really figure out why you want to do these things, what you're going to do. Figure out where it's going to have the best impact, whether it's a commercial impact, whether it's a customer engagement impact, whether it's a loyalty impact.
(04:51):
Figure out for you what's the thing that you want to start with from that sort of lens, the value lens. And once you understand that value lens, then you can start on the journey of understanding the use cases, understanding the patterns, understanding the tooling. What we have seen over the last year or so is just a scramble around the tooling, a scramble around the use cases, but
(05:13):
actually not a solid business case behind it. So lots of experimentation, not very much stuff going into production.
Speaker 3 (05:20):
Julia, I love the fact that you're focusing on the people element, because we all know that AI is going to impact people, and we need people to train AI. I read an article you wrote entitled "Gen AI Needs an Intelligent Approach to Adoption". What are the components of that approach?
Speaker 4 (05:33):
Picking up on the value side as well, we're seeing that a lot of organisations will struggle to hit their value levers if they don't get good employee adoption. But at the same time, I think there's not been a technology or a change that's been as complex. So that's why we talk about this intelligent adoption approach, and I think it's understanding at its heart that
(05:57):
it's not going to be linear. There are going to be lots of peaks and troughs as employees get excited, and then get access to tools and then maybe get disappointed; but then the tools learn and get better, so then there's a new wave of excitement. And so I think when we talk about intelligent adoption, we take a very data-led approach. Of course, it is about being responsive to employees in that
(06:20):
journey, and understanding when they're hitting those troughs and what they need around it to help them. But I think it's also really not assuming that everyone's the same. So it's very personalised to the mindset and the individual person, and not thinking that everyone in a certain function, or a certain persona that's organisationally based, is going to
(06:42):
be the same. So really getting to the individual and their own mindsets, their own history, their own background that they bring to AI, because it's a very personal change.
Speaker 3 (06:53):
Now you mentioned tools, and I understand you've developed your own Gen AI platform, ChatPWC. Great name. How did you develop it? Why did you develop it? And, importantly, how does it help your consultants be more productive with clients?
Speaker 2 (07:05):
I think ChatPWC is just the starting point for us. How did we develop it? What did we develop? Essentially, we've taken an OpenAI model, fine-tuned it and put a ring fence around it, so we can use it without pushing our data out into the public domain. So it does enable our people to use the technology, get familiar with the technology, and actually even things like prompt
(07:26):
engineering, for example, just getting used to that on a day-to-day basis. That's a really good use case for it. Over time, we'll see many, many more tools coming out for very specific reasons. We have a partnership with Harvey, for example, which is a legal version of it, again based on the OpenAI piece. But OpenAI isn't the only platform out there, and so we
(07:47):
are constantly horizon scanning as well to see what else is out there and what we should be doing. We have partnerships with Google. We have partnerships with AWS as well. Their technology is coming on in leaps and bounds too. So there's a lot of horizon scanning going on, a lot of tool building going on, and I think that, as an advisory firm, that's really important for us. We need to understand the widest possible picture, because
(08:09):
each client will be different. Each client will have a different need, a different requirement, and as much as we'd love to say that there's one silver bullet, one tool that solves everything, that's not really going to be the case. So understanding that horizon-scanning piece, understanding what's coming up and just keeping abreast of what's going on and how fast the pace is at the moment, that's an important
(08:30):
piece for all of our people to take into consideration.
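Darshan doesn't go into ChatPWC's internals, so as an illustrative sketch only: the ring-fenced pattern he describes usually means routing requests to a model behind a private endpoint, with a fixed system prompt carrying the firm's rules, while day-to-day prompt engineering happens in the user message. Every name below (the gateway URL, model name and system prompt) is invented for illustration.

```python
# Hypothetical sketch of the "ring-fenced model" pattern: requests go to an
# internal gateway rather than the public API, and a fixed system prompt
# carries the firm's usage rules. All names here are invented.
INTERNAL_GATEWAY = "https://ai-gateway.example.internal/v1/chat/completions"

SYSTEM_PROMPT = (
    "You are an internal assistant. Treat all input as confidential. "
    "Do not reproduce client-identifying information in your answers."
)

def build_ringfenced_request(user_prompt: str, model: str = "internal-gpt") -> dict:
    """Assemble a chat-completion payload aimed at the internal gateway."""
    return {
        "url": INTERNAL_GATEWAY,  # traffic stays inside the network
        "payload": {
            "model": model,
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": user_prompt},
            ],
            "temperature": 0.2,  # conservative default for business use
        },
    }

request = build_ringfenced_request("Summarise this meeting note in three bullets.")
```

Prompt engineering then becomes a matter of iterating on the user message while the system prompt and routing stay fixed.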
Speaker 3 (08:32):
So what are the sort of things that customers are asking you about when it comes to generative AI?
Speaker 2 (08:36):
The first question is: what is it, truly, in that sort of sense? We've got lots of people who have used it, or use ChatGPT for their own personal use. You know, like: let's go create an agenda for this, or let's create a holiday schedule for that. You know, just simple things like that. How do you then take that into business? How will it help me, what will it do, and how do I release that to my population inside the organisation?
(08:58):
Those are sort of the basic questions that we get asked on a regular basis. But there's also that level set. You know, you've got the non-exec directors, you've got the board, you've got sort of practitioners. They've all got a slightly different view of what Gen AI is and how it might work. So just doing that level set across the entire organisation is also something that, as an advisory firm, we get asked to do quite a
(09:19):
bit.
Speaker 3 (09:20):
I think that's so important, because it's the one technology that the board can actually play with. My parents in Adelaide, Australia, have heard about ChatGPT. How did they hear about it? It was on the news. So I'm sure they've all played with it, and I encourage my clients to also play with it: try it with something for work, and with something for your hobby. And then the penny drops: ah, that's what we can use it for. Do you find it's dangerous that the board has actually played
(09:40):
with the technology and maybe thought it can do X when it can do Y, and that level set becomes even more important than, say, cloud or IoT that they can't really touch and feel?
Speaker 2 (09:48):
I think anyone who's played around with the technology will have a personal view and a personal opinion, and it's great that they have that, because that generates passion, that generates that sort of view of "I actually want to do something with this". With some of the other technologies you mentioned, cloud, IoT, some of the other bits, they're a little bit esoteric, you know. They're like, well, what does that do for me at home, what does that do for me here? And you have to kind of say, well, you're using cloud every day.
(10:11):
If you're using email, if you're using photo sharing, you're using cloud. You don't have to go through that anymore; you've already got a sort of level of understanding. Now, you can also get bad habits that you have to sort of break. What it can do and what it can't do comes back to that responsible piece, but it also comes down to what it is that you want to do in your business. So for us, I like the fact that people have played around with
(10:32):
it and have some level of knowledge, but it's that level set that's still quite important that we need to do.
Speaker 3 (10:37):
And Julia, we mentioned Copilot. Probably a lot of people listening to this podcast don't know what Copilots are, and we're going to see more and more of them; the three of us are at the leading, bleeding edge of this world, playing with it. So, first of all, what are Copilots? What can they do? Specifically, what have you been doing in the Copilot space?
Speaker 4 (10:51):
Copilot is a series of Gen AI tools that Microsoft have launched, and I think the first thing to note is that there are multiple versions of them. So the main one that people touch straight away is Copilot for Microsoft 365. And I think they're quite clever in the way that they've named the tool. So it is this assistant that helps you with your work, and
(11:13):
that's the whole way that they've positioned the Copilot. So it's an assistant on your journey as an employee, in what you do day to day. So the Copilot in Microsoft 365 works across PowerPoint, Word, Teams, et cetera, and helps you summarise information or prepare first drafts of different reports. But then there's a series of other Copilots that are coming
(11:34):
through the Microsoft ecosystem, in things like Power Platform and GitHub. So we're going to see lots of them, and I think this is the big issue for employee adoption and for organisations: you can't always keep abreast of all of them at once, because the pace at which they're coming out is so fast. And now, I think, in the Copilot infrastructure you can easily
(11:57):
create your own Copilot, so the speed at which these are being created is phenomenal.
Speaker 3 (12:02):
Some of our listeners may have seen that at this year's Super Bowl there was an ad from Microsoft for Copilot, so now people are talking about this; they'll probably be asking people at Microsoft what it means. For me, that's the marker: when it's in a Super Bowl ad. Like crypto: all the ads last year were crypto. This year, Copilot was an ad, so it's a thing.
Speaker 4 (12:20):
It's a thing, exactly. And I think because there's Copilot, which they've rebranded into Bing, it is an everyday tool now. You don't have to be in an office environment to be using Copilots.
Speaker 3 (12:32):
When I speak with clients, I talk about the need for an AI executive council to help coordinate AI strategy and execution across the enterprise. So, Julia, do you agree with me, and are you seeing clients establish these as they understand how deeply AI is going to impact every part of their business?
Speaker 4 (12:46):
We do agree. I think the issue, like everything, is in the execution and set-up of them. So we're seeing very good examples, and then maybe some poorly executed examples. I think maybe in the future those councils may not be needed as AI becomes a core part of the business: we don't need an email council, for example. Exactly. But in the short term, yes, I think there's a strong
(13:09):
need for them. But I think the biggest danger that organisations face when they set them up is not being clear on the purpose. So I think it's about being very clear on what the purpose is and how it interfaces with other processes and committees in the organisation, and making sure it has the right autonomy. So there is a little bit to think about to execute them well.
(13:31):
But, yeah, I think, given the pace of change and the multidisciplinary focus that we need, it's a very good thing to put in place.
Speaker 2 (13:40):
I mean, the council term is kind of grandiose, isn't it, really? We've seen a lot of clients who have set up working groups, at least, right? So think of it in multiple levels. The council piece, absolutely: I think that's going to be really important over the next couple of years or so, and then it'll just be mainstream. But as a starting point, a lot of folk have already got the working groups with a multidisciplinary team, so it's
(14:03):
not one person that's in charge of it who's going to dictate how it's working across the organisation. They are a working group with multiple lines of service, multiple business units, whatever else, already embedded in, and I love to see that.
Speaker 3 (14:15):
I think it's also a coordination piece. I've spoken to clients where they uncover other people doing the same thing, and when I was at Telstra in Australia (remember, I was in a room like we are today), I brought six groups in, all doing something around small business. We decided at the end of the meeting we were going to do it once rather than six times. So part of it is like shadow IT: people are playing with it, we've got ChatGPT, we've got some OpenAI. Do you think there's a danger, though, that if you don't have
(14:37):
that coordination, people just go rogue, and then we've got issues with GDPR and data leakage and all those sorts of things? I mean, what are the things that customers, that clients, should look out for when even playing with and experimenting with AI projects?
Speaker 4 (14:52):
There's a lot of experimentation happening. I think one of the biggest dangers is that organisations almost have too many use cases, and those use cases are quite siloed. So I think there is a role to play in the coordination. It's a tough balance, because you want to encourage experimentation. I think it's very hard to unlock the value without having
(15:13):
that experimentation. But at the same time, where we're seeing better success in organisations is where they look at a use case and an application, and it might, say, be in a function like HR, but then they're able to apply it across the business. And so they think about how a similar pattern (and that's why we've started to use the phrase) could actually be applied across finance
(15:35):
processes or marketing processes in a similar way. So it's not just the danger of going rogue; it's actually the danger of missing out on the opportunity if we don't have more coordination across the different groups. But again, it's that balance of sharing but not controlling and stopping the experimentation; that's the difficult balance to
(15:58):
strike.
Speaker 2 (15:58):
Couldn't agree more, and really it's that balance of allowing your user base to experiment, to play around with technologies. You're going to get some fantastic ideas coming out of that as well. So too much control can stifle ingenuity and innovation, but with too little, as you said, you will see sort of rogue stuff going on all over the place.
(16:19):
The worst situation is that you repeat effort and spend money when you don't need to, right? And that's not a good place to be, because that erodes confidence in the technology.
Speaker 3 (16:28):
So, Julia, what are some of the common challenges that organisations face when they're integrating AI into their business strategies, and how do you help them overcome these challenges?
Speaker 4 (16:36):
The universal challenges that we see in the short term probably fall into two buckets at the moment. One is this concept of value. So there's been a lot of initial experimentation, and organisations have been comfortable to do that with small numbers. If I take Copilot as an example: roll it out to 300 users, see what value there is, see if they enjoy using it. And so it's
(17:00):
employee reaction that's been the measure up until now. Now that's pivoting into: so what's the actual ROI? So there's a lot of focus on how we actually determine the real value here, particularly if we scale it, and I think what's really hard with a lot of the AI tools that we're looking at at the moment is that they're not substantially changing a job.
(17:23):
That's just good news for employees, but from an ROI perspective we've got to think about it quite differently to FTE reduction. And so, if you have incremental improvements in time, how do you unlock value from that? And I think what organisations are struggling with is that link between some efficiency and time saving on one hand and more
(17:46):
productive employees on the other, and actually quantifying and even describing that. So that's a huge area for organisations at the moment, and the other one, the foundation piece, is obviously around data leakage and data privacy, ensuring that these tools don't allow our employees to access the wrong information
(18:07):
in the wrong way. And without having enough confidence in that, it's really hard to even start in this area. So, I mean, how do we help? We're big proponents of safe, short experiments. I think it's really hard, on the value side, to sit in a room and come up with 150 use cases.
(18:27):
Yet we've heard of lots of organisations that have done that, but then I don't know where you go with that. And then, at the same time, where we've seen the best, the fastest adoption, the fastest identification of the data issues, is where there's a safe group of employees that are testing it and uncovering the issues.
(18:47):
So we're big believers in small, safe tests to uncover the value and the data issues.
Speaker 3 (18:54):
I'm sure listeners are interested in where those quick wins are. I mean, initial studies have shown the effective use of Gen AI in customer-facing purposes like customer loyalty, satisfaction, retention and reducing customer churn. That impacts both the bottom and top line. But Darshan, are you seeing other areas, or are these the ones where there are quick wins? I mean, I love the safe, quick experiments, but are there areas
(19:15):
that people should just focus on, because it's a no-brainer as to where they should apply these tools?
Speaker 2 (19:19):
There are a few areas that are coming up time and time again; we've mentioned them a few times in some of the material that we push out. It's marketing. Marketing is definitely being disrupted quite a lot: things like creating copy, for example, or creating images. That's fundamentally changing with AI and the multimodal elements of AI. I think customer service and the contact centre is the other
(19:41):
area that we're seeing a lot of experimentation in, and I think that's going to be a big bang. And it's not about just reducing the number of people in your contact centre and having a bot do everything for you; it's actually augmenting the human in lots of cases. Those are the initial cases, I'm sure, and we'll see a better experience for the person phoning in, a quicker time to
(20:02):
resolve for the person that's phoning in or putting in a message, and I think that's a good thing. And then from that we can learn and figure out the best way to do these sorts of things in the long term. And the models are also getting better on a daily basis, so there's more applicability to other use cases. But frankly, right now: marketing, contact centre, HR,
(20:23):
those are the areas, those shared-service areas where you augment the human and keep the people in the loop. They're the areas where we're seeing the biggest bang for our buck. But also, you mentioned it, Julia: it's going to be where AI gets embedded into technologies. That's the other area that people are now starting to look at. And AI has taken up a lot of oxygen in every single media outlet, every single thing that you can think about. There are other technologies out there.
Speaker 3 (20:48):
No, yeah, there are a couple, really. Oh my goodness.
Speaker 2 (20:51):
Just one or two, maybe, and they're going to see a resurgence because of AI. Things like IoT: we could do lots with IoT, and IoT has been around for a while. Now you've got AI and IoT, and the way that you can interrogate that data has changed. You'll see a resurgence of IoT. You'll see a resurgence of blockchain. You'll see a resurgence of some of the other technologies
(21:12):
around the edges, and then you'll see a few new technologies popping up that couldn't have worked without AI.
Speaker 3 (21:18):
Data quality in the enterprise has been a constant challenge since long before Gen AI. So what do you see from clients when they realise they don't have sufficient quality of data to properly train the models, or the data simply isn't up to scratch, or isn't what I call "AI ready"? Let me rephrase that: how important is data quality, and what are clients doing to bring it up to speed?
(21:39):
So it's AI ready.
Speaker 2 (21:42):
Without good data, there is no good AI. There just isn't.
If you don't get your data strategy right, if you don't get the quality of your data to a point where it's actually valuable, then just putting AI on top of it isn't going to fix anything. You'll just get to a bad decision quicker, really. So getting your data strategy right, 100%, that's what you should be
(22:02):
doing.
How you get there might change. There are tools available now, based on AI, that can help you scrub and clean, et cetera. But also, do you really need to boil the ocean and fix all your data across the entire organisation before you use AI? Or do you fix it in one area, unlock that potential with AI, and then use that as a catalyst
(22:22):
to change the other data? Those are the conversations that we're having now.
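The "fix one area first" point can be made concrete with a toy data-quality gate: profile the completeness of one business area's records and only let the dataset feed an AI use case once it clears a threshold. This is purely an illustrative sketch (the field names and threshold are invented), not a PwC tool.

```python
# Toy data-quality gate: check field completeness for one business area
# before letting the data feed an AI use case. Purely illustrative.
def completeness(records: list[dict], fields: list[str]) -> dict[str, float]:
    """Fraction of records with a non-empty value for each field."""
    total = len(records)
    return {
        f: sum(1 for r in records if r.get(f) not in (None, "")) / total
        for f in fields
    }

def ai_ready(records: list[dict], fields: list[str], threshold: float = 0.95) -> bool:
    """True only if every field clears the completeness threshold."""
    return all(score >= threshold for score in completeness(records, fields).values())

crm_sample = [
    {"customer_id": "C1", "email": "a@example.com", "segment": "retail"},
    {"customer_id": "C2", "email": "", "segment": "retail"},
    {"customer_id": "C3", "email": "c@example.com", "segment": None},
]
print(ai_ready(crm_sample, ["customer_id", "email", "segment"]))  # False: gaps in email/segment
```

Fixing only the fields one use case needs, then re-running the gate, mirrors the "unlock one area as a catalyst" approach rather than boiling the ocean.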
Speaker 3 (22:25):
Large language models, or LLMs. First of all, maybe you could give us your own definition of what the term means, but what are you seeing when it comes to domain- or industry-specific LLMs? Where will we see them evolve in specific industries?
Speaker 2 (22:37):
LLMs. That's a very complicated thing, so I might not go into the technical details of how you build an LLM from scratch, but there are a number out there. We are not building LLMs from scratch here directly; we don't need to do that right now. There are plenty of options out there, and we work with most of the players that are out there. So we've got the OpenAI piece, we've got the Google piece,
(22:59):
we've got all the lovely stuff that's coming out on Hugging Face and available in Amazon Bedrock and so on. So for us it's: let's just use the base foundational elements, and there'll be different ones that we use for different purposes. I really like the Meta one, for example; I think the Llama models are very good models for certain things. But actually understanding the whole tapestry
(23:19):
of what's available to you is going to be quite key. But then it's how you take that forward and create that specialisation. We've started to see that in industry already, and we've mentioned Harvey a couple of times. Harvey, when it first came out, was essentially the legal version of ChatGPT. All right, fantastic use case; it actually plays really well with
(23:40):
a lot of clients who have an internal function around that space. Let's move that forward: what else could they do? Well, you could see Harvey moving into a couple of other areas quite adjacent to the legal space very, very quickly. We've mentioned the fact that you can create your own GPTs as well, because OpenAI have allowed you to do that. You'll continue to see that, and you'll see these specialised
(24:00):
models coming out. Remember, one model isn't going to fix everything for you either, and you've got to get used to the fact that you might have multiple models in your organisation, and you might stack models on top of each other to get to the outcome you want. So, for us, industry-specific is the way to go. We've seen it in cloud; it's a similar pattern that we've seen with a lot of other industries, a lot of other technologies. So we'll see that
(24:22):
come to fruition over the next 12 months, I'm sure.
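The "stack models on top of each other" idea is, at its simplest, function composition: one model's output becomes the next model's input. In this sketch the "models" are stand-in Python functions (a pretend general-purpose extractor feeding a pretend domain-tuned classifier); in practice each stage would be a call to a hosted model, and both function names are invented.

```python
from typing import Callable

# Stand-in "models": in reality each would call a different hosted LLM.
def extract_clauses(text: str) -> str:
    """Pretend general-purpose model: pull out the clause-like fragments."""
    return "; ".join(s.strip() for s in text.split(".") if s.strip())

def legal_risk_flag(clauses: str) -> str:
    """Pretend domain-tuned model: flag anything mentioning liability."""
    return "REVIEW" if "liability" in clauses.lower() else "OK"

def stack(stages: list[Callable[[str], str]], prompt: str) -> str:
    """Pipe the prompt through each model in turn."""
    out = prompt
    for stage in stages:
        out = stage(out)
    return out

verdict = stack([extract_clauses, legal_risk_flag],
                "The supplier accepts no liability. Payment is due in 30 days.")
print(verdict)  # REVIEW
```

Swapping a stage for a different model (a cheaper one, or an industry-specific one like the legal example) changes the behaviour without changing the orchestration.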
Speaker 3 (24:24):
And Julia, in that people and soft-skills space, are these LLMs becoming important, so that we can tune them for specific roles and functions?
Speaker 4 (24:30):
At the moment, I would say yes. So, if I continue with our Harvey example, that's a very specific and easy-to-adopt use case for legal teams, or those people that want to do research-type tasks. So I think it does help with employee adoption, because it's speaking the language, or it's attuned to the tasks that they
(24:52):
would do. It will be interesting as it expands, though, and as we start to look at use cases or patterns in one area and try to apply them to another. So it'll be a combination of: how do we ensure that we get the most value out of certain use cases, but then how do we have the underlying data that is specific to my role or functional discipline area?
(25:15):
But, yes, I think the combination of kind of interdisciplinary thinking, but with subject-specific content, is going to be kind of the utopia that we're looking for.
Speaker 3 (25:26):
A question for both of you: what's the most unique problem you've seen solved by AI to date by your clients?
Speaker 2 (25:32):
Some of the use cases that we're seeing do have a massive commercial impact. So it would be wrong of me to sort of blurt out some of the things that I'm doing with the investment banks. But they're happening. They are absolutely happening. I think financial services are probably leading the way, again because they've been using AI models for quite some time, whether it's algo trading or whatever. They're just used to it.
(25:52):
The regulator's been on that journey as well, so I can see that being a big, big use case. But let's just go back to it: the contact centre. The contact centre will be fundamentally different in 12 to 18 months' time than it is today, once AI becomes mainstream. That's going to be the biggest unlocking of value for most of our clients, because you can serve more clients.
(26:13):
You can serve them in a more meaningful way and get to a resolution quicker. But the great thing is you're also tracking all of this. It's really hard to track phone calls; it's very easy to track interactions here, and to make your service better and actually get rid of some of the problems before they even happen. I think that will be the big one where I'll see a lot of this working over the next 12 to 18 months.
Speaker 4 (26:33):
The benefit of the work that I do with Copilot is that it's the universal tool. So in some ways I'm looking at it more from the angle of how lots of different people use it without necessarily having a strong technical background. And I think at first, when people pick up a tool like Copilot, there's a lot of summarising of information
(26:54):
, summaries of meetings, summaries of documents. I think where it's very interesting now is to see how everyday users start to actually generate content with it. So that's the more interesting use case, I think, and it is, though, about breaking it down into first drafts or small pieces of work. So the art of Copilot is not in the grand gesture; it's
(27:16):
actually in small tasks being broken down effectively, and then, when you look at that kind of value chain, seeing how much more effective or quick the output is.
I think the nicest use case I've heard of with Copilot is really around employees that
(27:38):
either have a non-English-speaking background and are working in an English-speaking team, or have certain learning disabilities that may hinder their ability to understand meetings that are spoken quickly, et cetera. And so there's a really nice kind of use case where employees just feel very empowered; it's brought them up to a standard where they see immediate results.
Speaker 3 (28:01):
I'll give you one of mine. I use an AI tool called Otter (otter.ai) to transcribe meetings; this podcast will be transcribed with Otter. It was only two months ago that I worked out I can talk to it. I can go: hey Otter, what are the key components? Give me some quotes from this. And I'm using some of this for my book: rather than going through 80 hours of podcast recordings, I can actually get it to pull out quotes and summarise and those sorts of things. And that saved me so
(28:23):
much time, and I'm now finding I'm pushing it harder and harder to do things. So the co-pilots I'm using are actually making my own job, and writing a book, much easier. So I think it's about playing with it. I actually saw someone last night who was in a board meeting; they were using Otter to record the meeting. I said: did you know you can ask Otter questions? No. So I think a lot of our aha moments are going to happen as people try and push the technology further and further.
(28:43):
Are you seeing that?
Speaker 4 (28:44):
Yes, absolutely. So again, I think we see it in things like meeting summaries. So you know, "recap this meeting" gives you a nice summary, but I think it's when you actually then ask things like: what was the quality of this meeting, how could we improve the meeting, what were the opinions of this person, what was their kind of
(29:04):
feedback on X, Y and Z? And so when you actually have a conversation with Copilot and ask it about certain aspects of the meeting, it's impressive how rich that information is. So another area where employees are often seeing benefit is with their OneNote kind of notebook. So they might have years of bits and pieces of information,
(29:26):
meeting notes etc.
Stored there, and so they'renow using co-pilot to
interrogate that and pull outthematically certain trends or
topics that they just wouldn'thave the headspace to uncover on
their own.
Speaker 3 (29:38):
I've been writing a journal for the last 15 years, every day, using Day One.
It just freaked me out.
If I ran that through AI, I'd probably find out how my whole mood and everything changed over my time in London.
That would be amazing.
Speaker 2 (29:51):
I've got a tip for you, though, for your podcast.
So you're using Otter and doing that sort of stuff.
One of the other things we're starting to see is just language translation.
So you've got your podcast.
You've got the stream of data there.
You can push it through AI tools and have it in four, five, ten different languages instantly.
The models will get better, but we've been playing around with
(30:12):
some of that right now.
So we had a lovely Scottish chap with a lovely Scottish accent, and he spoke about whatever he was speaking about.
We put it through one of the tools and got it in German, in his accent still, and actually, because it was a video, it even did the lip syncing, essentially.
Stuff that you would only see in the Hollywood movies.
(30:33):
Now it's like one click away.
But, the gimmick aside, you're dealing with the German team, the French team, the Spanish team; let's say European languages for now.
You can now interact with them in a way that you couldn't do before.
You don't always have to speak English.
Once the models get better and we have some of the Asian languages on there as well, imagine what that will unlock.
(30:54):
So I think there is quite a lot going on that will be super impactful, and AI isn't just about productivity and those hard measures.
There's that soft measure of just having your podcast in Spanish.
What would that do?
That would just open up a whole new audience.
Speaker 3 (31:09):
Hadn't thought about that, but the technology just makes this so much easier, doesn't it?
Julia, I recently read an article you wrote saying that GenAI is not a product you can buy, implement and adopt once.
It's a concept, a paradigm, almost a completely new way of thinking.
So what should companies be doing now to prepare their workforce for using GenAI, and for the impacts of GenAI on talent?
Speaker 4 (31:29):
The overarching thing that we talk about on the employee adoption side is that this isn't a once-and-done training exercise, for example.
So you have to really think about it as a journey, and I think where we've seen the best success in organisations is where they have an honest dialogue with employees.
There's a lot of uncertainty, there's some fear, there's some
(31:51):
distrust, and so I think having that honesty and having a two-way dialogue is very important.
I think, rather than thinking about training, it's better to think about giving your employees agency.
So the power for them to pick up the tools, to use the tools, to input into their development is very, very important.
(32:13):
So this concept of co-creation or experimentation, allowing them to have a voice, is a very important part of the journey.
With the speed at which we've adopted new technologies, some organisations have skipped over that, and now they're going back to it.
So they're seeing that they do need to revisit how they bring employees along that journey.
(32:34):
But it's not through an announcement or a training program.
It's through constant reinforcement and constantly giving them the power to make their own choices around what they get involved with, how they might reskill, how they might experiment, etc.
Speaker 3 (32:52):
So that's a good point.
For many, the impact of GenAI is so deeply personal, often invoking concerns about job security, skill relevance and careers.
What can be done to address these issues with employees and consumers?
Speaker 4 (33:03):
So I've worked in workforce planning for a long time.
I have a very optimistic view on AI and the impact on workforces at a macro level.
So I see the biggest issue facing us as actually that we don't have enough people.
You know, when you factor in ageing workforces, the need that we're going to have even in social care and the care sector,
(33:24):
and the number of people coming through the system, we don't have enough.
So I see AI as being a bit of a saviour to augment employees.
So at that macro level, I'm quite optimistic, and I don't see, in the short term, massive disruption in terms of people not having jobs.
But what I do see, and what I think we do need to be quite
(33:46):
concerned with, is the skills gap.
For those that have the time to experiment, those that have the confidence and the aptitude to just get their hands dirty and get into this, it's going to open up amazing opportunities, amazing roles and amazing work.
For those that don't have that opportunity, don't have the confidence, don't have the job at the moment that gives them
(34:08):
the time to do this, they could be left behind, and I think that skills gap is probably the most concerning thing from an employee perspective.
So I think there's a real role for employers in allowing the space and time for employees to reskill in this area.
Speaker 3 (34:27):
One thing I talk a lot about is the need for critical thinking, if AI is going to do some of the heavy lifting.
I don't now need to summarise the meeting, because a tool does it for me.
Talk to me about how important critical thinking is, not just now, but also in our education system.
Should we be teaching more about critical thinking at that early stage?
Speaker 4 (34:43):
I've been quite interested for a number of years in the multi-disciplinary education systems that are coming through.
So, rather than specialising in disciplines like law or engineering, you really look at an issue every semester and you tackle that issue from all those
(35:04):
different perspectives, and I really think that's going to be the future.
When AI allows you and gives you that foundational base knowledge so fast, I think the role left for humans is to make the connections, to apply critical thinking, to have enough technical knowledge to be dangerous.
But I think it's all
(35:27):
those applications of things; even from our introductions, both of us have quite a multi-disciplinary background, and that's to our advantage.
So I think, as we work into this new world, absolutely these human skills are fundamental, and the sooner we start to bring them into the education system, the better.
Speaker 3 (35:49):
So, final question on the future of work.
We were talking for a number of years about the future of work being about remote working and distributed working and those sorts of things.
What does the future of work under AI look like?
Speaker 4 (35:58):
It's early days to understand, but I do think there's a huge optimistic benefit, and we talk about productivity, but I mean it in its broadest sense, so unlocking time to be more creative can be a version of
(36:20):
productivity.
So for me, the future of work really resonates around: how do we get rid of a lot of the non-value-adding work that we do?
If you talk to any office knowledge worker at the moment, they never get to do their job.
(36:42):
There's so much internal process, internal meetings, the meeting about the meeting, and we've been talking about it for a number of years.
You add the digital debt from all the different chats and things like that, and it's really hard actually to carve out the time to do your actual job.
So I'm very optimistic that these tools will help us get back
(37:02):
to that.
They can take out a lot of the noise of our jobs, they can take out a lot of the process, and they really allow us to get back to: how do I as an individual, and how does my team, add value?
And I think part of this journey is actually reflecting on: as an individual and as a team, what is the value
(37:25):
that we create?
That way, if AI can do some of the basics, where is it that I refocus my time?
And whether that's done remotely or in short weeks, I think all of those different aspects are now probably going to be on the table.
But I do think at its heart, it's really
(37:45):
about a clear understanding of value and focusing on how you achieve that.
Speaker 3 (37:51):
And Dushan, what does the future of work under AI look like for you?
Speaker 2 (37:53):
There are a couple of points I'd like to make on the future of work.
Under AI, it could get very easy to work remotely, it could get very easy to be super productive, but actually that human connection, actually being human a bit, interacting with your colleagues, your friends, your family, becomes just as
(38:14):
important.
I think you can be as productive as you like, but that spark of creativity sometimes just comes when you have a conversation, when you interact with someone.
So I think that's going to be quite important, and offices might change to be more like creative hubs, rather than somewhere you sit for eight hours tapping away at a screen.
That might be one thing, but it's also the mindset of
(38:35):
organisations that needs to change, and one of the earlier questions you asked was about what this looks like for the workplace as well.
I think the future of work is going to change in two ways.
We are going to have to have a mindset where we are used to constant evolution, with periodic moments of revolution.
(38:57):
So that's how it's going to work, right?
So you're always changing, always adapting, always taking on something new and embracing that, and then, every now and then, you have to throw out your operating model and do something completely different, and not be afraid to do that.
Companies that do that will progress and flourish and actually be attractive places for people to work.
(39:18):
And then the other piece is just the mindset of the individual I value and am looking for in my team.
I'm not looking for the best technologist I've ever seen, or the best X or the best Y.
I'm looking for the best problem solver.
I really don't care what discipline they have.
I want them to look at something; you talked about creative thinking, but it's like: how do you solve that problem?
(39:39):
What's the methodology you use in your head?
I haven't found an AI that can do that just yet.
Maybe it'll come, and I hope it will, but right now I value problem solving, and we don't teach enough of that at school at the moment.
Speaker 3 (39:51):
So, darshan, your
title is Emerging Technologies
Leader.
What can we expect beyond AI?
What's emerging that we may nothave heard about in the media?
Speaker 2 (39:59):
Some of it's in the media already, but you cannot ignore what Apple are doing any minute now.
It's out in the US already, but when the Vision Pro takes off globally, that's fundamentally going to change the way that we interact with technology.
I always think of it as: we've all got our phones in front of us.
You can't see it on the podcast, but we've all got our phones
(40:19):
in front of us, just about, and we're always looking down.
With the Vision Pro and other technologies like it, we're all going to lift our heads up.
That's the first thing.
We're going to look forward, not down.
I think that's going to be a fundamental change, and I think some of the technologies behind the scenes (we've got a whole series of things called the Essential Eight; go look it up, it's quite good), it's the resurgence of some of these technologies that will,
(40:42):
kind of in the background, be supercharged with AI: IoT, blockchain, Web 3.0. Everyone stopped talking about that, but it could be a thing as well.
Those types of technologies are coming up, and then, out on the further radar, just different ways of working.
Like him or not, Elon Musk has a lot of businesses out there, and
(41:04):
his Neuralink business is kind of the next big thing, I think.
So I'm going to struggle with the word, but neuromorphic computing is probably something that might be coming up.
It'll probably start off in the sort of diversity and inclusion areas, but actually it'll become mainstream over time.
Speaker 3 (41:21):
On the Vision Pro, I'm glad it's version one, because the iPhone version one was quite limited; the iPhone 15 has a lot of features.
I'm hoping we're not going to meet in a meeting where everyone has these big ski goggles on.
I'm hoping it's going to be some sort of thin-film contact lens we can look through, so as I'm looking at you, I'm seeing everything as well.
I think that's what it's got to get to, because of that barrier it puts in front of you.
There are some amazing memes of people
(41:43):
on the subway, and driving, with this thing on.
It's funny.
Speaker 2 (41:46):
The technology will get better, the hardware will get better and faster, etc.
You know, we've seen Meta launch the collaboration with Ray-Ban, for example.
So that does look like a normal pair of glasses, and we're going from virtual reality through to sort of augmented reality, through to mixed reality, and once these things do get smaller and easier to use, I'm pretty sure we'll
(42:09):
see a version of that in the near future.
But yes, you're right, right now they do look quite intimidating, don't they?
But we'll see what happens.
Speaker 3 (42:18):
So technology is evolving just so quickly.
How do you both stay informed and continuously updated, and how do you update your skills to remain at the forefront of this rapidly evolving field?
Speaker 2 (42:27):
I do two things.
Number one, I read a lot.
I have blocked-out times for actually catching up on stuff.
Julia, you mentioned that we don't always get to do our job; I actually block out time to do my job, and I think that's quite important.
But I also learn from the people around me: clients, conferences.
I think the more I absorb, the better I understand what I need
(42:49):
to do, and then I can give back.
Julia, how do you stay up to date?
Speaker 4 (42:52):
Yeah, similarly, I pay a lot of attention to what my 15-year-old nephew is doing; that would be my first phase.
And yeah, I try to do a lot of reading.
I find it hard to find the time, so it's the conversations.
Having as many conversations as possible and hearing the practical reality of what's going on is how I do it.
Speaker 3 (43:13):
I'm lucky.
I probably speak to 20 or 30 leaders a year just for the podcast, and so I'm learning so much.
Today I've learned so much as well, so thank you for that.
We're almost out of time.
We're up to my favourite part of the show, the quickfire round, where we learn more about our guests.
So for both of you: iPhone or Android?
iPhone.
Android.
Window or aisle?
Window.
I'm loving this.
In the room or in the metaverse?
Speaker 4 (43:34):
In the room.
Speaker 2 (43:34):
In the room.
Speaker 3 (43:35):
Good to hear.
For both of you: I wish that AI could do all of my... form filling.
Speaker 2 (43:40):
Yes, form filling.
Oh my gosh.
Yes, definitely.
Speaker 3 (43:42):
The app you both use on your phone?
Spotify.
Speaker 2 (43:45):
Email.
Speaker 3 (43:45):
Julia, your biggest hope for this year or next?
Happiness.
Dushan, the best advice you've ever received?
Be yourself.
Julia, what are you reading at the moment?
Speaker 4 (43:53):
Iron Flame.
It's a dystopian future.
Speaker 2 (43:57):
Book of choice?
Actually, I don't really have a book of choice.
I have Tintin, which I'm reading to my daughter.
I love Tintin.
We're going through the whole series of Tintins with my daughter.
Speaker 3 (44:05):
What's your favourite Tintin episode or story?
Speaker 2 (44:07):
The one at the moment is the one where they're going to space, that seems to be...
Speaker 3 (44:10):
yeah, I can picture
it.
The rocket, the rocket one.
I love that.
That's the one.
The rocket Tin Tin.
Yeah, for both of you.
How do you want to beremembered, for kindness?
Speaker 2 (44:16):
That's a nice legacy
to have.
Speaker 3 (44:18):
Now, as this is the Actionable Futurist podcast, for both of you: what three actionable things should our audience do today to prepare for a world of enterprise-grade AI?
Speaker 4 (44:27):
Educate themselves.
Experiment, so don't get stuck in planning mode.
And listen.
Speaker 2 (44:34):
A mindset shift.
Don't be afraid of change, and constant change; that'd be the first thing that I would say.
And embrace new ideas and new thinking, wherever they come from.
And read the Essential Eight on the PwC website.
Speaker 3 (44:47):
I have read it.
It's fantastic.
A fantastic discussion.
How can we find out more about each of you and your work?
Speaker 4 (44:51):
Probably best for me is LinkedIn these days; we're trying to publish as much of our thinking there, and also on the PwC website.
Speaker 2 (45:00):
LinkedIn from a work perspective.
And if you're really that bored, go onto Instagram.
You can see me posting about nature most of the time.
Speaker 3 (45:08):
Thank you both so much for your time today.
Speaker 1 (45:10):
Thank you, you're welcome, thank you.
Thank you for listening to the Actionable Futurist podcast.
You can find all of our previous shows at actionablefuturist.com, and if you like what you've heard on the show, please consider subscribing via your favourite podcast app so you never miss an episode.
You can find out more about Andrew and how he helps
(45:32):
corporates navigate a disruptive digital world with keynote speeches and C-suite workshops, delivered in person or virtually, at actionablefuturist.com.
Until next time, this has been the Actionable Futurist podcast.