Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:08):
Welcome to the Time and Motion podcast with me, your host, Lee Stevens.
For over 25 years, I've worked with businesses all over the world to improve the technology and the people within them.
In this podcast, I share some of my experiences and I chat to guests who generously share their stories of how to or, in some cases, how not to live a productive life.
I hope you enjoy the show.
(00:28):
So this is one of the first sessions we've done on the Time and Motion podcast about AI, and it certainly won't be the last.
This week I caught up with Chris Dury, who's an old colleague of mine, and we talked about the practical uses of AI,
(00:51):
how you start planning for it and what's the strategy involved in using AI within a business, both large and small.
So it's a really good session if you just want to know about some of the terms, some of the trends and how you can start bringing some of those conversations into your workplace.
Hope you enjoy the show, Chris.
(01:12):
Good morning. Welcome to the show.
Speaker 2 (01:14):
Good morning, Lee, great to be here.
Speaker 1 (01:16):
How are you doing? So we've got Chris Dury on the show and I think, in the interest of transparency, we used to work together.
Speaker 2 (01:24):
We did, we did. Way back.
Speaker 1 (01:26):
It would be almost a decade ago, I think, by now. Yes, it was obviously an interesting amount of people that went through that Empired business in different shapes and forms, and, you know, lots of people dotted around in the IT community all around the world these days as well.
Speaker 2 (01:39):
Right, it's an interesting alumni of people. There's quite a lot, um, have gone on to form their own businesses, and it's actually really interesting to follow up with them on LinkedIn and see how everyone's going.
Speaker 1 (01:51):
It is, it is. Right, Chris, I always like to find out a little bit about the guests and what their background was. So just tell us about you: where you grew up, what was your story, you know, early days.
Speaker 2 (02:02):
Well, you might have detected my accent. So I'm not, um, not from Australia; didn't grow up here. I grew up in Canada, in the outer suburbs of Toronto, um, which, if I say it like I'm from Toronto... they would actually say "Torono", because that's how you can tell if someone's natively from there or not: if it's more like a "Torono" rather
(02:25):
than "Toronto", um. So, yeah, grew up in the northern, uh, suburbs of Toronto and, um, was living there until I was about 25, um, and then moved to Australia. So my high school friend, uh, grew up in Canada but was born in Australia, so that was my link here, and decided to go on
(02:46):
an adventure and landed in Adelaide in Australia, and within, I think it was about three weeks, started working at OBS, which was a SharePoint consultancy. I didn't know anything about SharePoint. I was a .NET developer before, but I had no clue about
(03:06):
SharePoint or information management or anything like that.
And, yeah, from there sort of went on. From the development side, got more involved in sort of team lead of developers, and then kind of progressed through to becoming an architect, and then eventually was a team lead of an architect team, and
(03:29):
then at Empired, I was leading what was called the digital advisory team, and that was solution and enterprise architects. We'd like to think of ourselves as the pointy tip of the spear: so go in, you know, with the client, and try to uncover where the real problems are and what kind of solutions could be brought to bear against those problems.
(03:51):
Um, and then put together a program of work to work on that. And what sort of year are we talking here? Uh, that was all the way up to, um, 2020. Uh, after, um... I was kind of spinning my wheels a bit there and decided in early March 2020 that I would resign from a very
(04:12):
nice role at Empired, and then, about two weeks later, we went into lockdown, and it maybe didn't seem like such a good idea at the time. But then, uh, shortly after that, I found an opportunity to, um, join an aged care provider and became a CIO, uh, for two and a half years with them. Okay, and you talked
(04:35):
about the enterprise software.
Speaker 1 (04:36):
So we're going to talk a little bit about the AI sort of realm that you're involved in now. But I'm a big believer, and I think there's quite a few of us in the market, probably at the right age as well, where I think, with a lot of the tools, I kind of feel like all the roads we've kind of taken have led us to this point in time, and actually all the stuff we've kind of learned along that
(04:57):
way is kind of going to benefit us. And so I feel like, you know, some of the people that weren't involved in some of that information management and architecture and all the kind of good old-fashioned consulting that we used to do... um, I think it's going to serve us well. But, um, do you agree with that sentiment?
Speaker 2 (05:12):
I do. I think, um, there's a term that I borrow from... my wife's a pharmacist, so she spends a lot of time in kind of health care, and they have a notion of what they call the advanced generalist. And it's very much that kind of T-shape that is maybe a more familiar term for people in the technology space, where you have a very deep expertise in one
(05:35):
area, but you also have expertise across multiple areas. And I think that that's the role you're describing now, and the role that people fit in, um, to take best advantage of what these platforms can offer. They're so wide-ranging in their functionality and capability, you really need to have a deep understanding of, you know, information management in general, but then also it's
(05:56):
really helpful to have an understanding, you know, all the way from cyber security, all the way to user experience and experience design and how people are actually consuming these tools. Okay, so let's talk a little bit about what you're doing at the moment. Uh, so right now I'm doing a couple of things, um, as, I think, is probably common with most people.
(06:17):
They've got a few, um, things on the fire, so to speak. Uh, so predominantly, um, doing, uh, working on ways of using AI to deliver better strategy, uh, work for people. And that's all the way from strategy formulation, which would be understanding, um, the environment and what you might
(06:40):
do about that, uh, through to actually conceiving the strategy, so making decisions, and then into strategic implementation. So, as that strategy evolves over time, how does it need to change with what's emerging in the environment?
Speaker 1 (06:55):
Yep, a hundred percent. And if we took a... so if we look at AI from a business productivity perspective, how do you see AI transforming a traditional business, like, say, a healthcare business that you've worked in, over the next five years?
Speaker 2 (07:12):
There are, like, so many, um, ways that this could happen, so I'll probably just use a few examples. Uh, one example, probably more, um, at the front line, maybe a little bit more tactical, is in disability care service providers. There's a requirement that they have now to lodge a notice to
(07:36):
the regulator that they've applied what's called a restrictive practice, and this might be either through pharmaceuticals or actual physical restrictions that they put on a client. And those forms, or that notice, has to be lodged within a certain period of time; I believe it's within 24 hours. So you can see the situation where the regulator's applied a
(07:57):
new requirement on providers, and it's very costly for them to meet that requirement. What some providers are doing is using AI to scan their case notes. So when a provider, or when a care worker, is working with a client, they'll be filling out a note to write down what's happened during that shift or during that time.
(08:19):
They're using AI to pick out the fact that in that note there was a restrictive practice applied. It's then pre-filling that form with the client's details, with the time and date and so on, and then sending that off to the regulator automatically. And I don't know the exact impact, but
(08:41):
you can imagine that it's hours, or even tens of hours, a month being saved for these providers who have to meet these new regulations. And that's an interesting example, because it's happening all the time, especially with funded providers. There's a lot of regulation being applied to them, and a lot of it is really in that sweet spot for AI: one, to understand the case notes; to extract the fact that the restrictive
(09:04):
practice has been applied; to then synthesize that into a report, um; and then to do the automation to send it back to the regulator. And an example like that as well...
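A rough sketch of the case-notes workflow Chris describes (detect a restrictive practice in a free-text note, then pre-fill the regulator's form) might look like the following. The field names, the note format, and the simple keyword detector standing in for the AI step are all illustrative assumptions, not the actual system:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Illustrative stand-in for the AI step: in practice a language model would
# read the free-text case note and decide whether a restrictive practice
# occurred. These keywords are made up for the sketch.
RESTRICTIVE_TERMS = ("restrained", "sedated", "locked", "restricted")

@dataclass
class RegulatorNotice:
    client_id: str
    timestamp: str
    excerpt: str

def detect_restrictive_practice(note_text: str) -> bool:
    """Return True if the note appears to describe a restrictive practice."""
    lowered = note_text.lower()
    return any(term in lowered for term in RESTRICTIVE_TERMS)

def prefill_notice(client_id: str, note_text: str, when: datetime) -> Optional[RegulatorNotice]:
    """Pre-fill the notification form only when a practice is detected."""
    if not detect_restrictive_practice(note_text):
        return None
    return RegulatorNotice(
        client_id=client_id,
        timestamp=when.isoformat(),
        excerpt=note_text[:200],  # snippet kept for a human reviewer
    )

notice = prefill_notice(
    "client-042",
    "Client became agitated and was physically restrained by two staff.",
    datetime(2024, 6, 1, 14, 30),
)
print(notice is not None)  # True: this note triggers a pre-filled notice
```

The real systems Chris mentions would lodge the filled form with the regulator automatically; here that last step is left out.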
Speaker 1 (09:13):
I mean, I'm obviously in a similar realm to yourself, but my argument with this stuff is that none of this stuff's actually that transformational. It's been around for a few years, right, and it's just that all of a sudden there's a bit of a, uh, a focus on AI and what it can do, and everyone wants to do it now. And so, you know, I mean, you know, that's a great example, right. But, you know, if I look at some of the technologies there, you
(09:35):
know, what sort of technology are you looking at and using to do that example?
Speaker 2 (09:40):
So that one's a couple of things. There's obviously the orchestration and integration which, like you mentioned, has been around for ages. What's new is that it's probably way easier than it's ever been. And I think what a lot of people are missing out on with large language models is actually understanding what it is. There's a few people who liken large language models to a new
(10:06):
type of computer, and I think that's... I'm not sure if that's actually true or not, but I think it's a useful model to put into your head, to try to think a little bit differently about it.
Speaker 1 (10:16):
And so, on that note, let's, for our listeners... let's assume that maybe they don't have, you know, the degree of knowledge that they would like. Large language models and small language models, because small language models have been in the news the last few weeks as well.
Speaker 2 (10:33):
So just explain to the layman what those are. Sure. So large language models are normally what people would consider as ChatGPT or Google Gemini or Claude; those are probably the most notorious ones, I suppose. And the reason why they're called large language models is that they've been trained on a large document set, so in this case,
(10:56):
billions of documents, and that's also created what they call a set of model weights. And if you can imagine that as a cloud of points, and each of those points could be a concept or a term, and those concepts then have relations, that model is modeling what those relations are in a statistical way, um,
(11:20):
and then there's an algorithm that can walk along those relationships, and then it generates the text based on that.
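The "cloud of points with relations, walked by an algorithm" picture can be illustrated, very loosely, with a toy word-following model. Real LLMs learn vastly richer statistical relations than this; the tiny corpus here is made up purely for the sketch:

```python
import random
from collections import defaultdict

# Toy corpus standing in for "billions of documents".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Learn the "relations": which words have followed which (the statistical model).
relations = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    relations[prev].append(nxt)

def generate(start: str, length: int, seed: int = 0) -> list:
    """Walk along the learned relations to generate text."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        followers = relations.get(words[-1])
        if not followers:  # dead end: no word ever followed this one
            break
        words.append(rng.choice(followers))
    return words

print(" ".join(generate("the", 4)))
```

Every generated word is chosen by following a relation out of the previous word, which is the "walking along relationships" idea in miniature.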
Speaker 1 (11:25):
So it's crunching, like, literally billions, and whatever the next one up of those is, of lines of data, right, and digesting it very, very quickly.
Speaker 2 (11:35):
Yeah, that was the main breakthrough. So the GPT stands for generative pre-trained transformer, and that architecture is interesting. If you have a bit of a development background, you can understand that there's loops and that you have to process information kind of sequentially; you have to go through one at a time. And what the GPT architecture did, or what the transformer
(11:57):
architecture did, was create a very efficient way to process all the information at once. So when you are using an LLM, every word it generates, it's looking at all the words generated previously to then generate the next word. So small language models, on that, would be smaller versions
(12:19):
of that, where they're maybe more narrowly focused on different domains. So that might be on audio, or it might be on imagery. It could also just be on specific languages or other implementations.
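The idea that every new word is generated by looking at all the words before it can be sketched as a toy causal self-attention step. The tiny dimensions, the shared query/key/value, and the random inputs are purely illustrative of the mechanism, not a real transformer:

```python
import numpy as np

def causal_self_attention(x: np.ndarray) -> np.ndarray:
    """Mix each position with itself and every EARLIER position only."""
    seq_len, dim = x.shape
    # A real transformer uses learned projections; here Q = K = V = x.
    scores = x @ x.T / np.sqrt(dim)
    # Causal mask: position i must not look at positions j > i.
    mask = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
    scores[mask] = -np.inf
    # Softmax over each row turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))       # 4 "words", 8-dimensional embeddings
out = causal_self_attention(x)
print(out.shape)                  # (4, 8)
# The first position can only attend to itself, so it passes through unchanged.
print(np.allclose(out[0], x[0]))  # True
```

The efficiency Chris mentions comes from the fact that all positions are computed in one matrix multiplication rather than one at a time in a loop.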
Speaker 1 (12:35):
The best one I've seen recently, and for me this is a bit of a game changer, both when you consider my background, is actually your internal data. So if you think about it, the data that your organization holds is generally not that much compared to the rest of the world. But if you think about it, if you can then go and point an agent, for example, like a Copilot agent, at all of your
(12:55):
organizational content and make sense of it, that's a game changer, right, for a lot of businesses. And I actually think that's where a lot of businesses will start, because they'll go, oh, hang on, we've got all this information, let's do something with that first, and then let's tackle the world next. So, yeah, any strong thoughts on that?
Speaker 2 (13:14):
So I think it's important, when taking that approach, to consider the quality of data that you have. Sure, yeah. So the common use case is: let's do that and point it at our policies and procedures, because we have a high turnover. We've got people who come in; they don't know what our procedures are. Nobody reads these things, but it's really important for people
(13:37):
to follow them. There's a massive assumption there that, one, you have the procedures written down, which oftentimes you don't, or the procedures will be out of date, so there's an effort to refresh those. Secondly is getting those procedures put into a way that the large language model can process them.
(13:57):
So what it does is it stores them in, usually, what's called a vector database, and that lets the chat agent... so you ask it a question, and it tries to understand semantically what you are asking. So it's not necessarily looking for keywords; it's trying to say, well, what's Lee asking in this situation? And then it looks for vectors in the vector database that are
(14:21):
closest to that vector. And, just to be really clear and to keep our model in mind, those vectors are those connections between the points that we were talking about before. So it looks for things that are similar to that, and then it retrieves those back. But if you've ever played with some of these tools, you'll notice that what they retrieve back may be more or less
(14:45):
information than you want. So it may take a snippet out of that procedure that's not the complete procedure, so then you've got to click on it and go and read the whole thing, which gets you back to the original problem. Where I think the idea that people have in their heads is they want to say, what do I do in this situation?, and for the large language model to say, here's the 10
(15:05):
steps you need to follow, right away. And I think it's getting better.
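The retrieval step Chris describes (embed the question, then find the closest vectors in the database) can be sketched like this. The hand-made three-dimensional "embeddings" and the snippet names stand in for what a real embedding model and document store would produce:

```python
import numpy as np

# Toy "vector database": each procedure snippet mapped to an embedding.
# A real system would get these vectors from an embedding model.
db = {
    "fire evacuation procedure": np.array([0.9, 0.1, 0.0]),
    "leave request procedure":   np.array([0.1, 0.9, 0.1]),
    "incident reporting steps":  np.array([0.7, 0.2, 0.6]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of direction between two vectors (1.0 = identical)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_vec: np.ndarray, k: int = 1) -> list:
    """Return the k snippets whose embeddings lie closest to the query."""
    ranked = sorted(db, key=lambda name: cosine(query_vec, db[name]), reverse=True)
    return ranked[:k]

# A question about what to do in a fire would embed near the first vector.
print(retrieve(np.array([0.8, 0.0, 0.2])))  # ['fire evacuation procedure']
```

Note that this matches on meaning (direction in the vector space), not keywords, which is exactly why it can return a related snippet rather than the full procedure the user actually wanted.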
Speaker 1 (15:08):
Right, because I agree. You know, with what it returns, it's like probably the early days of search, where, you know, you get to... yeah, you know, it's like the prompts, I always say. You know, if your prompts are like... if you just type in "blackberry", how does it know you mean the phone? Do you mean the fruit? You know, just give it a bit more. Those two extra pieces of information can make all the
(15:28):
difference.
Um, but, you know, I think a good example, um, I'll probably give there is... it's getting better. And there was a really classic example with a business that I've worked with recently, and they, um... I won't give their name away, but essentially they're one of the biggest repairers of photocopiers, and you go, oh, that business is dead. You know, they're dead and buried. But they actually don't say that's what they do. They say, we fix anything that prints something. So think of the things that are actually in the ascendancy: self-service checkouts, you know, parking machines, you know, where there's no humans there, but they're still printing something. Kiosks at the airports. And so that business is going boom. They support something like 3,000, I think, 3,000 different devices, and they've got manuals for every single one of these,
(16:13):
and we were working through some scenarios, and I said, well, look, just point your agent at all of those manuals, because at the moment it takes about two hours for an engineer to get the manual, then find the piece in the manual; it might be 300 pages long. So we just did a couple of scenarios and ran some tests there, and for them it was like, I need to fix the feeder tray on
(16:33):
the, you know, I don't know, Ricoh 1234 machine. So they put this query in. It was a little bit loose, and we got better with the prompts, but it gave them that information within, like, seconds, which was previously a two-hour job. So I think... but, you know, it still wasn't perfect, but I think it's getting better and better. And I look at, like, the Copilot example, which is one I work
(16:56):
with quite a bit. If I compare the results and the returns we're getting now compared to maybe February, you know, they're just, you know, so much better. So, um, so, yeah, I think... but I think, um, yeah, maybe training the system's important as well.
Speaker 2 (17:11):
There's been some technical advances that have enabled that. So, um, you might've heard of something called the context window. That's the, um, amount of information you can exchange with a large language model, which includes your initial prompt and then any additional questions that have come after that. And originally, I think ChatGPT 3.5 had a context window
(17:34):
of 4,096 tokens. The latest versions are up to a million, and what that means is you can then take the entire set of all the manuals, because a million tokens is actually a big number. You can take that whole entire set and send it in the context window, and then have it generate the piece of information that
(17:56):
you need, which gets you much closer to that experience where it's not only processed that information, it's understood it, and it can synthesize a new version of that information for whatever situation you're facing right now. And I think, yeah, you can...
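As a back-of-envelope illustration of what those context-window sizes mean, assuming the common rough approximation of about four characters per English token (real tokenizers vary, and the page size below is made up):

```python
# Rough heuristic only: real tokenizers differ, but ~4 characters per
# English token is a common rule of thumb.
CHARS_PER_TOKEN = 4

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // CHARS_PER_TOKEN)

def fits_in_context(docs: list, context_window: int) -> bool:
    """Could this whole document set be sent in a single prompt?"""
    return sum(estimate_tokens(d) for d in docs) <= context_window

# A 300-page manual at an assumed ~2,000 characters per page:
manual = "x" * (300 * 2000)                      # ~150,000 estimated tokens
print(fits_in_context([manual], 4_096))          # False: far beyond early models
print(fits_in_context([manual] * 6, 1_000_000))  # True: fits a 1M-token window
```

So a single long manual dwarfs a 4,096-token window, while a million-token window can take several whole manuals at once, which is the shift Chris is describing.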
Speaker 1 (18:10):
You can see the difference as soon as you go, straight away. Um, going back to businesses: so even in 2024, my suspicion is there's still an element of fear about AI and the unknown. Is that a fair statement, and what are your thoughts on that?
Speaker 2 (18:28):
I think there are... I totally agree with that. I think there's initial excitement that happens. People see the demos. You know, your average executive gets excited. They want to do something bold and daring. And then they kind of hit the wall of reality: that they don't have good policies around how they manage their information,
(18:51):
they don't actually know necessarily where all of it is all the time, and a lot of these AI platforms, you can't run them yourselves. So the idea that you can put it in your own data center and kind of put your arms around these things: there's very few people that can actually do that, and that's typically
(19:13):
governments or militaries or large organizations. There are smaller models you can run yourself that are open source, but they don't perform nearly as well, and you also incur all of that cost of running those systems.
Speaker 1 (19:29):
So I think the... you mean, like, you own a private AI?
Speaker 2 (19:35):
Yeah, you can go buy a bunch of GPUs and set up a rig at home and run some of the open source models that approach the level that GPT-4 or Claude can get, but it's not nearly as performant as those ones are. I do see that, like anything, there's that initial peak of
(19:57):
expectations and then there's a trough of disillusionment, and we're probably in that space right now for business. They've seen all the hot takes of AI going wrong, and they've maybe been scared by their cyber security people about, um, you know, where is this information going, and do you have a good handle on it?
And I do think that some of the, uh, use cases haven't been
(20:21):
compelling enough, because a lot of them have been focused on marketing or call centers or things like that. With marketing, there's generally very few privacy worries there. And then call centers: they're not necessarily dealing with sensitive private information all the time either.
(20:43):
I think in the coming months, or in the coming years, you'll see a lot of that change. I think you'll see a lot of organizations apply the missing piece that the sort of out-of-the-box large language models have: that people need to demonstrate full end-to-end management of their information. And you can see that with what
(21:04):
Microsoft's done with Copilot: the content's hosted in your SharePoint tenant or in your Microsoft environment, and they're very transparent about who's got access to that and what the life cycle of that information looks like. Yeah.
Speaker 1 (21:19):
I'm doing a fair amount of education on Copilot, but I actually say you gotta think of it as two in one. So I say, you know, if you think about, um, you know, Copilot for your internal content, it's your super organized EA, secretary, you know, PA, that knows where everything is, knows exactly what's going on in the organization, you know, and can
(21:40):
put their hand on information immediately. Whereas, um, the external element of Copilot, for me, reminds me of when I was a work experience guy in a record shop, and I used to get sent on all the errands to go and fetch things from outside the shop, not actually working in the shop that much. And so I feel that that's the analogy I use. But I feel that the internal stuff's probably, as I say, I
(22:04):
think that's where the biggest gains are going to come. But you touched on some of the myths, for example, that are out there, and the one I really liked recently was, I think they call it the so-so effect or something like that. So what they're saying is that humans aren't as stupid as people think, and the systems aren't as clever as they think, and then you end up with this kind of, you know, middle land.
(22:26):
But what's some of those other kind of, I suppose, urban myths about AI that you're hearing and seeing, that always kind of make you chuckle?
Speaker 2 (22:35):
So, yeah, I'd say hot takes, um. And I do love seeing the memes where AI's gone wrong and it's just completely either done something, like, really inappropriate, or has really let people down. So I think some of the experiences in the US around the drive-through... I was actually back there in August last year
drive-through I was actuallyback there in August last year
(22:57):
and did go through a McDonald'sthat had an AI drive-through and
it was painful, it just didn'twork and there's other ones.
You know the images.
You know google was generatingimages of diverse nazis, for
example, which, which isn't real, um.
But I think a lot of those mythscome from a fundamental
(23:20):
misunderstanding of what LLMs are. And, to put it really simply: if you treat these things like a database, you're using it wrong. It's a reasoning engine. You're meant to ask it to make a decision, or to evaluate something, or to take these five things and make something new
(23:40):
with them. Asking it, when did man land on the moon? It's not necessarily going to do that very well, because that's not what the architecture is designed to do. It's probabilistic. It's not something that will always guarantee you the same answers. So you may have tried that. You may have asked the same question one day and, the next
(24:04):
day, it's a completely different answer. That's actually a feature, um, and this is where I find it quite interesting for things like strategic planning or for brainstorming or creativity. Um, you actually want it to have some variety, and you want the language model to suggest things that you've never... that may be impossible, because that helps promote some
(24:24):
creativity in yourself, um. So I think, when people try to approach these things like a database, it's not their fault necessarily, 'cause that's kind of what they're put out there to do. Like, it's your co-pilot, it's going to help you to do everything; and it's your oracle, it's going to answer all your questions. Yeah, um, but that's not the right way to use the tool.
Speaker 1 (24:46):
A couple of things there, though. So you mentioned, around the, like, example, the quality. One of the funniest ones I had in the last few weeks was I was writing... I needed an email written, and my English is very much London English, so, you know, it leaves a lot to be desired when it comes to professional email sometimes. But, yeah, I do my best. And so I wrote this email. It was looking pretty good, but, you know, it being Copilot,
(25:09):
everything was spelled with a Z, so, you know, like "synchronization". Yeah, yeah, yeah. And so I literally just said, that's fantastic, or something like that; now rewrite it using Australian English, thinking, great, it's going to correct all the Z's. It rewrote the email saying, G'day Dave, you're not gonna Adam and Eve this, take a Captain Cook at this attachment. And it, like, literally put it in slang. I
(25:30):
was like, no, that's not what I meant by Australian English. But, you know, taking it by the letter of the law, I suppose, it was right. Um, so, yeah, I hear you on that, and I feel that, um, with some of these tools, they will get better and they will know how humans, you know, do operate. Um, the other one I kind of feel that's probably something
(25:50):
to look out for is actually the, uh, the role of... you touched on it just then: the role of, uh, an AI tool as an assistant to find that information. So I did start using this, interestingly, because I'm so annoyed with the search results on both Bing, you know, and Edge and Google, that you had to go hunting, because adverts,
(26:10):
adverts, adverts, adverts. You scroll, you know, there's a bit of information, you find what you want, then there's more information, so it's sandwiched in between. So I did, for about probably a month, start using Copilot as a tool, because I just wanted it to, you know, get to the point.
And then I think I was playing around with the, uh, I think Bard
(26:30):
, and, you know, one of the other tools. And so, you know, I think I was looking at how we fix... oh, that's right, how do you fix... Uh, moving to Australia, we found out that everything's a metal joist. You know, there's no timber studs in the walls, you know, especially in Queensland. So I was like, okay, how do I go and, you know, attach, you know, a shelf to that stud wall? So I went on to Copilot, or I think it was Bard. It was really good. It said, go and find a magnet, you know, go and do this. Real nice, um, you know, nice set of instructions.
(26:54):
By the way, here's an advert for some screws that you're gonna need. Here's an advert for a drill that you're gonna need to do it. And already... because I just couldn't work out where the play was, you know. And I think that's the answer there, right: it's going to serve up, you know, information, and, by the way, here's where you can buy the screws and the bolts. And so I feel that there's an ethical side that we probably haven't seen, or, you know, some
(27:15):
debate that we haven't seen truly yet, but I feel that's starting to creep in. Any other thoughts about that?
Speaker 2 (27:21):
Yeah, I think that's a really interesting change in consumer behavior, and they're trying to get ahead of that. Certainly, people have gone to these co-pilots or ChatGPT and asked for an answer, which is a bit different than what you're doing with searching. With searching, you're trying to find something that's going to describe the answer, whereas, when you're using a chat agent,
(27:45):
you just want the answer directly. So I think that it's probably a reasonable response for these organizations to then start to litter them with upsells and so on. It's no different than what happened in search. Yeah. [no transcript] You're going to do it on Saturday
(28:30):
because you've got to wait for this equipment to come and, you know, these screws to come, and I've already queued up a video for you on how to do it. And, um, I think those are quite interesting, and the idea there is also that your agent, um, represents your interests.
(28:53):
I was listening to this podcast about how they thought that paid search would become much more prevalent because of that: that people would be annoyed by the upsells and wouldn't trust the information, because it's not very transparent where information comes from in a large language model. That, in order to preserve your interests, the equation probably has to change: that consumers will need to pay for access to these
(29:16):
models, to then know that the information is being given to them kind of cleanly, or without bias.
Speaker 1 (29:23):
Yeah, and I think that's where Microsoft are, probably. It's the same with journalism at the moment, right? So, you know, you've seen a shift back towards paid, trusted sources, because people don't know: is it left leaning, right leaning, is it paid? Who's funding it? Whereas, generally, you know, with paid journalism... you know, there's always going to be some biases, I'm sure. But, yeah, I feel that that transparency is going to have
(29:46):
to be there, and I think that's what I was saying with Microsoft: you almost don't have to worry about that, because you go, well, I'm paying for it, they're not going to use my data. They use a zero trust policy; all the kind of security is there. So the fact that I have to pay for it probably means that they don't have to use my data. When it's free, I'm like, are they using it to train?
(30:07):
Yeah, exactly.
Yeah, and it was interesting. There's been some debate, um, you know, in some of the work I've been doing. There's obviously a lot of SaaS tools out there, and there was a couple that, you know... because I was helping out, you know, in terms of selection, and I asked a question. And obviously all these tools are now, you know, um, shouting from the rooftops about their AI capability and
(30:28):
what it can and can't do. So I would say to them: just ask the question, you know, can they 100% categorically say that they won't use your data to train the system? And it's interesting: about 80% of the 10 we've spoken to couldn't guarantee that. They go, no, that's why we're using it, because, you know, we've learned from one client so we can make the feature better for the others.
(30:49):
And so, yeah, I feel that that's probably going to be a challenge for a lot of businesses, um, especially government as well, right? You know, for, you know, federal government, that's not going to work. So, yeah, interesting. Um, you touched on a bit of this earlier, but data quality, um... so that's something that's obviously dear to both our hearts, given our backgrounds.
(31:09):
But how can you maximize the productivity benefits of AI by having good, you know, good, solid, robust, clean data?
Speaker 2 (31:23):
You mean getting to clean data?
Speaker 1 (31:26):
Well, I think it's a two-pronged thing. You're right, because you probably need to clean what you've got, and then it's a case of, well, how do you ensure that the data's clean, pure, going to give you what you need? Because I feel that's a big challenge for a lot of businesses, will be a future challenge for a lot of businesses, when they realize that this fancy AI tool they're putting across all this information... as you said earlier, the information's not actually very good, or it's out of date, or it's still talking
(31:48):
about fax machine to send, youknow, communications and that
kind of stuff.
So you know, I mean, how isthis a big big deal, do you
think, for a lot oforganizations?
Speaker 2 (31:57):
I think it's bigger
than they have anticipated.
I think every organization, and what's coming top of mind to me is just a few problems I'm dealing with at the moment.
A lot of organizations have multi-terabyte file stores that have 20 years' worth of information in them that they
(32:20):
haven't dealt with, and some may have records management systems, and those may or may not be well used.
It seems to have fallen out of fashion over the past maybe five years, and probably over the past 10 years I'd say that organizations are not putting as much effort into records management as maybe they once did.
I'm sure there are exceptions to that, and various regulated entities and
(32:40):
government agencies and so on will be doing probably the minimum of what they need to do, but there'll be a chunk that's sitting in a file share somewhere, and I've yet to really see anyone deal effectively with this.
One strategy we're looking at using is just migrating the
(33:03):
entire set into SharePoint or into Azure Files, and both options give you the opportunity to run an AI on top of the files to then unpack what they are.
You can do some rudimentary searching on file dates to say, well, this is 10 years old and it looks like a financial
(33:25):
document, so we can definitely get rid of it.
But it's probably questionable if that document is 10 years old and it might involve, you know, hazardous work material, or it could be client information that still might be relevant today.
And without something like AI, I don't really see people being able to deal with that big blob of data, which to me is
(33:48):
more of a glaciation problem: it's stuff that people have hidden away, they've put it on their file share.
I don't know about you, but every time I go to a client site, they open up their Windows Explorer and every file share is red because it's nearly full, and they're constantly having to drop more disks into their SAN to try to buy a bit more space.
I think organizations really need to deal with that.
(34:10):
And from a cybersecurity point of view, that's what the CryptoLocker and data exfiltration criminals go for, because they can get all kinds of information that way.
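As an editor's aside, the rudimentary date-and-name triage Chris describes can be sketched in a few lines of Python. The 10-year threshold and the filename keywords below are illustrative assumptions, not a real retention policy; in practice an AI classifier would sit behind the "needs review" bucket.

```python
import time
from pathlib import Path

# First-pass triage of a file share: bucket files by age so the obviously
# stale ones can be reviewed (or handed to an AI classifier) first.
STALE_AFTER_DAYS = 10 * 365                       # illustrative threshold
FINANCE_HINTS = ("invoice", "receipt", "budget")  # illustrative keywords

def triage(root: str) -> dict:
    buckets = {"likely_disposable": [], "needs_review": [], "recent": []}
    now = time.time()
    for f in Path(root).rglob("*"):
        if not f.is_file():
            continue
        age_days = (now - f.stat().st_mtime) / 86400
        if age_days < STALE_AFTER_DAYS:
            buckets["recent"].append(f.name)
        elif any(h in f.name.lower() for h in FINANCE_HINTS):
            # Old and financial-looking: a candidate for disposal.
            buckets["likely_disposable"].append(f.name)
        else:
            # Old but unknown contents: exactly the blob that needs AI help.
            buckets["needs_review"].append(f.name)
    return buckets
```

This only looks at metadata; the hard part Chris points at, judging the contents, is what the AI layer would add.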
Speaker 1 (34:25):
Well, I think that's getting better as well, because I would have agreed with that maybe two, three years ago.
But one thing that COVID and the lockdowns did do is essentially accelerate people's migration to things like Teams and SharePoint, because all of a sudden it was quite appealing to have it all in the cloud.
So I would have agreed with you
(34:46):
as of two or three years ago, but I'd say probably 80% of the businesses I talk to now don't have those file shares.
They're using Teams or they're using SharePoint, or they've still got a file share, but it's only really used for that legacy stuff, or some of those old tools where they've got some macros they just don't want to touch because it's like a house of cards.
So, yeah, I think it's getting better, but yeah, 100% agree around the security concerns
(35:09):
you raised.
But I was literally in a conversation yesterday, and because we talked about Teams and the benefit of chat, I was saying to someone, oh, I'm just going to sync here and it's just going to sit on my machine like a file share.
I was like, but the constructs of a folder and a file just won't be here in 10 years.
It will just be different, right?
And they were like, I don't care, I love a folder, I'm just going to
(35:31):
work in that way.
So it'll be interesting to see how some of those younger generations that come through, who just aren't used to those ways of working, adopt some of the new tools.
Yeah, you know, will that file-folder paradigm disappear over time?
Speaker 2 (35:47):
I don't know.
I'm not sure if the save icons have moved on from floppy disks yet; I think a few of them are still floppy disks.
Speaker 1 (35:56):
Yeah, yeah.
So that's a good example for our kind of daily routine tasks.
What are some of the other areas where you feel AI can play a role in improving that for most information workers?
Speaker 2 (36:12):
So I've been doing a lot of work in the strategic planning space.
How can you use AI to help you with strategic planning?
I think a lot of people have an idea that strategic planning is something that only happens in the boardroom or around executives, that it only happens once a year or once every couple of years.
But if you think about it, you would probably go through the same
(36:34):
activities any time you start up a project, or any time you have a problem that you need to solve, and it could be a BAU problem.
So, to carry on with this example, you've got some file shares and you want to get rid of them.
You need a strategy for that.
It doesn't need to be a corporate strategic plan to do
(36:57):
that.
It might be just a small project that you're going to do.
And what I've found is that using things like LLMs to help you, one, understand what the problem is in a really holistic way, but then to start to do the planning.
They're very effective at doingthat and what sets them apart
(37:18):
from what normally happens isbecause they've been trained on
such a vast set of data, theytend to look at things without
bias.
And that's usually why we have group workshops or consensus meetings: you're trying to overcome the bias that one person has.
They've got a blind spot, they've never done something
(37:39):
before, or they're just not looking in a certain area.
But if you ask an LLM to say, give me the exact steps I need to follow to do this project to completion, I would bet you would read that list and see something there that you would never have written down on your own, because it's a blind spot.
(38:00):
So I think that's where the nomenclature around copilots comes from, right?
It's there, it's helping you, it's kind of filling in those blanks that you've got.
I think in those use cases it's really good.
Being able to then say, okay, well, there's the 10 things that I need to do, now, one by one, what are the risks and opportunities and
(38:24):
actions that I should be taking with each step, and quite quickly, in a matter of an hour, you've got a comprehensive plan that you would have had to go to a consultant to get.
And often the plans can be generic enough that it's enough to then move on with a project; when it gets specific and you might need to
(38:45):
bring a consultant in, that's fine.
You can do that later within that program or project, but you've been able to make a start.
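The "give me the steps, then interrogate each step" loop Chris describes can be sketched as a two-pass prompt pipeline. `ask` here is a stand-in for whatever LLM call you actually use; the stub below is only there to make the control flow runnable.

```python
# Two-pass planning loop: first ask for the steps, then ask about each step's
# risks and opportunities. `ask` is a placeholder for a real LLM call.
def build_plan(goal: str, ask) -> dict:
    steps_text = ask(f"List the steps needed to complete: {goal}")
    steps = [s.strip("- ").strip() for s in steps_text.splitlines() if s.strip()]
    return {
        step: ask(f"For the step '{step}', what are the risks, opportunities "
                  f"and actions I should take?")
        for step in steps
    }

def fake_ask(prompt: str) -> str:
    # Deterministic stand-in so the sketch runs without an API key.
    if prompt.startswith("List the steps"):
        return "- Audit the file shares\n- Migrate to SharePoint"
    return f"Analysis for: {prompt[:40]}..."
```

Swapping `fake_ask` for a real model call is the whole "hour to a comprehensive plan" workflow; the structure stays the same.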
And the way I've observed people do this, they become much more confident about, yes, let's go on with this thing, or let's not do it, because they've been able to see that full piece of information.
Whereas I think for a lot of decision makers, a lot of problems are intangible because they don't quite
(39:07):
understand them, they don't know what the end looks like, and it's a lot easier in that situation to just put the problem off, to say, okay, I'm going to deal with that one later because there's actually a more quote-unquote urgent problem I need to deal with today.
So what I'm actually trying to do is change
(39:29):
that behavior in people: instead of putting off those important but non-urgent things, actually use tools like AI to understand how you might tackle those things today and get moving on that, because that's actually what's causing the problem for you.
Speaker 1 (39:50):
What would be your top three business cases, the ones where you've sat back and gone, you know, that was a good piece of work?
Are there a common three that you can think of?
Speaker 2 (39:56):
So I did some work with a client about six months ago, which was around developing a set of foresight scenarios, which is a way of doing strategic planning that's a little bit more, at least in my opinion, robust than what most people do.
Most people make up some goals.
(40:19):
They might take extrapolations of where they are today and say, okay, let's just have 10 more of those things, or 100, or whatever, and then work backwards from that with their plan.
What a scenario does is say, what are your most uncertain trends, or the most uncertain things around you that you're worried about?
Let's look at different stories about the future that involve
(40:43):
more or less of those trends.
So if you were an aged care provider, you might look at government funding being extremely high or extremely low.
You might also look at, well, we don't know about robots; maybe humanoid robots are going to be a thing.
Let's assume that there are robots everywhere, and then
(41:04):
that there are robots nowhere.
If you put those two criteria on a matrix, you then have four distinct scenarios.
You've got one that's high government funding, high robots.
You've got one that's low government funding and high
(41:25):
robots, for example.
And so you can create stories about the future, and then you can say, okay, well, what could we do to be successful in all of those futures?
And that's how you build more robust strategies.
Typically, these exercises would have taken a number of workshops, a lot of involvement with experts, probably six to 12 weeks to do.
You can do this exercise yourself with ChatGPT in 20
(41:51):
minutes, and the results are pretty good.
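The two-axis matrix generalizes neatly: each critical uncertainty contributes its extremes, and the cross product gives the scenario set (four scenarios for two axes, eight for three, and so on). A minimal sketch, using the aged-care uncertainties from the example:

```python
from itertools import product

# Build foresight scenarios from critical uncertainties: every combination
# of extremes becomes one story about the future to plan against.
def scenario_matrix(uncertainties: dict) -> list:
    axes = [[(name, extreme) for extreme in extremes]
            for name, extremes in uncertainties.items()]
    return [dict(combo) for combo in product(*axes)]

scenarios = scenario_matrix({
    "government funding": ["extremely high", "extremely low"],
    "humanoid robots": ["everywhere", "nowhere"],
})
```

Each resulting dict is one corner of the 2x2; the LLM's job in the workshop was fleshing those corners out into stories.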
There's research being done now on qualitative and quantitative comparisons between the approaches.
There have been a couple of articles from HBR, the Harvard Business Review, about whether AI can do strategy.
It's really fascinating to have a read of those.
(42:15):
But in terms of use cases, one of the ones we went through was around trend scanning.
So we used AI to ideate what are some trends that we should be looking for, and then did some manual Google searching and some sensemaking of that.
And then when we did the scenario generation, we used the
(42:39):
AI to write the structure and the kind of bare-bones scenario, and then, as a work group, we tweaked those scenarios to put in more features that we wanted to have inside those stories.
And what was interesting was, by using AI, the meetings we would conduct with the people quite quickly moved from "so what's happening?" to "so what, what does it mean?"
(43:02):
Whereas I find with a lot of group workshops, you spend nearly 90% of the time getting to that "what's happening", because people are having to generate it themselves, they're having to come up with those ideas.
Whereas we had the AI do all that work for us, and it was really just, how does that make you feel?
What do you think about that?
(43:22):
And that's actually the real essence, in my opinion, of good strategic planning, because it was taking the expertise and the knowledge that was in the group, in that program, and thinking about the "so what" rather than just the "what".
Speaker 1 (43:39):
Yeah, and I think it's going to be a bit of a journey for a lot of organizations to be comfortable with that approach as well.
One example I've done was very simple, and I like the idea you talked about, that blind spot, because I did one for Teams
(44:00):
governance.
Let's just say, how do we use Microsoft Teams?
They said, look, we don't need it to be like Ben Hur, we just need something that gives us some industry best practices.
It was the blank sheet of paper.
Like you say, five, six years ago you'd probably have had to write that from scratch, researched it.
So I did it, tweaked it.
(44:20):
It did 80% of what I needed straight away, and then it needed a little bit of tweaking, and it then got people thinking about, how does that work really?
What does that really mean?
But, you know, it was good enough.
That was certainly not at the strategic level you're talking about, but I don't see any reason why you couldn't use that same approach as well.
And I think that's kind of the problem you're solving, I guess, with
(44:41):
your current gig.
Is that a fair statement?
Speaker 2 (44:46):
It is, yeah, and I think you're right.
We were kind of knocking on the lack of current policies and procedures earlier.
There's no reason why you couldn't use Copilot to regenerate those quite quickly.
Like you say, you get to that 80 percent.
I'm certainly doing that now for our clients.
I'm using AI to generate the skeleton of a policy
(45:11):
framework, and then all the way down to guidelines.
And what's great is you can do the work and then also play that guideline, for example, back to the AI and say, here's the framework, here's the guideline, give me a critique of that, where it can then look for inconsistencies.
It can then make sure you're using the right language.
(45:34):
So, for example, with guidelines you want to be very prescriptive about how you're describing things, whereas with policy you're sort of saying, this is the outcome we want to have.
It can make sure that you're doing that, and sometimes, if you're on a multi-page document and you want to go quickly, you might have overlooked some of that work.
(46:01):
What's also fun is if you take something like, most organizations have a critical event management plan or some sort of emergency procedure, that kind of thing.
You can put that into your favorite chat system, all of them will handle it, and then you can say, simulate one of the emergencies or one of the events from this document and take me through it step by step and test me on my responses.
Which is, to me, a really interesting use case, because
(46:25):
a lot of the time we push out documents to people and we say, read and understand this, and I don't know any millennial that will do that.
They will absolutely not read those documents at all.
They don't read email at all.
But if you said, here's a game, it's a choose-your-own-adventure and I'm going to test you on what to do, it can
(46:48):
take you through a scenario.
So I've done this with ones where I say, let's do a cybersecurity scenario with our current event plan, and it asks really good questions, like, there's been a cyber attack, who do you notify?
And then you do this with your IT staff, and they'll be like, uh, actually we're not too sure about that, we would call the manager.
And, well, the manager's sick today.
(47:10):
Who are you going to call then?
And you start to uncover these gaps that people have that they didn't even know they had, because they just flipped through the document and didn't really read it.
And so I think it'll be really interesting to see how organizations adapt to more interactive learning rather than just pushing documents out.
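The "test me on my responses" drill boils down to a question-and-check loop. In practice the questions come from an LLM that has read your event-management plan; the hard-coded questions and expected keywords below are purely illustrative.

```python
# Sketch of the incident drill: walk through scripted questions and record
# which ones did not get an answer containing the expected keyword. A real
# version would let an LLM generate the questions and judge the answers.
DRILL = [
    ("There's been a cyber attack. Who do you notify first?", "it manager"),
    ("The IT manager is off sick. Who do you call then?", "deputy"),
]

def run_drill(answers: list) -> list:
    gaps = []
    for (question, expected), given in zip(DRILL, answers):
        if expected not in given.lower():
            gaps.append(question)  # an uncovered gap in the team's knowledge
    return gaps
```

The returned list is exactly the set of "we're not too sure about that" moments Chris describes uncovering.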
Speaker 1 (47:29):
A great example I can think of recently on that subject is this company called Arctic Wolf, who are a cybersecurity business, and they're known for their cheesy videos, right?
So you have to watch these cheesy videos, and they basically stealthily teach you.
Apparently they got a bit of feedback on this about two or three years ago; it was a talk one of their guys gave, and he said, do you know what?
(47:51):
We're going to double down on those cheesy videos.
We're going to make them even cheesier.
Those woolly jumpers aren't going away; they're going to be even woollier, because people remember them and people actually watch them.
So they're actually watching and learning about, you know, the next installment of, okay, what's a phishing attempt, or how to spot rogue email addresses, etc.
(48:12):
Because they want to watch the cheesy video.
So I kind of feel that's on that level, right?
People are learning and absorbing information in different ways.
Speaker 2 (48:22):
Yeah, I think that's a really undersold ability of LLMs, their capacity to tailor information for each person.
So there are some people using this, maybe in less-than-ethical ways, in marketing.
You're seeing that the email blasts that go out to
(48:42):
everybody are being written for your persona, or for whatever category they've put you in.
But you can also see this for neurodivergent people.
So I've got a really good friend who really struggles
(49:03):
to read big, long emails, and every executive on the planet loves to write big, long emails explaining everything, why they're making the decision that they're making, and they would really struggle when these emails came out, and I would help explain to them what was going on.
You can see how they could use, and I know they do now use, LLMs
(49:25):
to be their translation between information going in and going out.
So in this case, with the words this person chooses, when you and I read them, we might take offense.
They don't mean any offense, but it's just the words that come out of them.
And so they've adapted their work style to be like, well, most of the time I'm going to put it through
(49:47):
this, you know, sanity checker, because I don't want to offend someone inadvertently.
They just really lack that ability themselves, and I think that's also a really interesting use case, to be that translator to and from others.
Speaker 1:
If we look out to the future, maybe not too far out, maybe the next four or five years, what are some of the big trends
(50:08):
you think we're likely to see in the AI space for businesses?
Speaker 2:
I think, as we mentioned before, autonomous agents.
There's a lot of research and development going into this space.
So Microsoft is actually sponsoring an open source
(50:30):
project called AutoGen, and this sort of loops the output of one agent, or one LLM, into another, and so you can have a network of these agents working together towards a common goal.
You can describe a manager agent, a copywriter agent, a UX agent and a
(50:54):
developer agent, and you can say, make me a game, and the manager will then delegate tasks to those agents.
I think you'll start to see things like that become much more prevalent in the future, where you can give a system a really high-level goal and say, do this thing for me.
The issue they're having right now is that when those agents
(51:18):
hallucinate, it tends to cause a cascade of hallucination, and so you've got to throw out everything that's been done.
But as soon as they fix that problem, you'll see this happening much more.
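The manager-delegates-to-specialists pattern that frameworks like AutoGen implement can be caricatured in a few lines. Real frameworks add conversation history, tool use and termination checks; every name here is illustrative, not AutoGen's actual API.

```python
# Toy delegation loop: a manager splits a high-level goal into tasks and
# routes each task to a specialist agent. In a real framework each callable
# would be an LLM-backed agent; here they are plain functions.
def run_team(goal: str, manager, specialists: dict) -> dict:
    tasks = manager(goal)                        # manager decomposes the goal
    results = {}
    for role, task in tasks.items():
        results[role] = specialists[role](task)  # each specialist does its part
    return results

def manager(goal):
    return {"copywriter": f"Write the text for: {goal}",
            "developer": f"Write the code for: {goal}"}

specialists = {
    "copywriter": lambda task: f"[copy] {task}",
    "developer": lambda task: f"[code] {task}",
}
```

The hallucination-cascade problem Chris mentions lives in this loop: one bad `results[role]` becomes the next agent's input unless something checks it.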
And then you start extending that to agent-to-agent communication, so negotiation.
A real example would be
(51:39):
buying office equipment: I need paper, I need whatever, from Officeworks.
Why wouldn't you have an agent in your business that negotiates a price with the Officeworks agent and then organizes transport of that material back to your office?
And you could then extend that example to many different places.
So I think the autonomy will start to happen, and that's
(52:02):
certainly in line with what OpenAI and Anthropic are both chasing.
They're both chasing what they call artificial general intelligence, or AGI, and that's predominantly those use cases where you're able to give a real high-level instruction and get an outcome out of it.
I think another one that will be interesting is in the things we ask
(52:30):
AI to do.
I won't say workplace surveillance, but it is sort of workplace surveillance, though I think it will be in a benevolent way.
So you have things like, I think they tried this out a long time ago, maybe with Delve, or the vision of Delve was
(52:51):
mapping the social network in an organization and the strength of connections between people through email and through chat.
I think you'll start to see more of that.
I believe even Microsoft Viva, the top tier of that model, does some of these things, but I think it's for your information
(53:12):
only.
It doesn't actually take action.
But you could imagine an AI, especially if people start to personify their copilots, that AI being more active and saying, here's a person that you wouldn't have met
(53:33):
in your normal course of doing work, but we think you would like them, and it would be interesting and beneficial for you.
And so I think there's probably something a bit Big Brother-ish or a little bit dystopian about it, and we probably need to walk that fine line.
But for organizations, increasingly, to the degree that they can create connections across silos and across
(53:53):
boundaries, the more networked an organization can be, the more competitive it will be, and I think that these tools will be a way to achieve that.
So, you know, not sure how that's going to play out and how people will take it.
But again, we've been kind of complaining about millennials,
(54:15):
but if you talk to them about privacy, they've got a very different idea about that, and something that you and I may resist may not be an issue for them.
Speaker 1 (54:25):
Yeah.
On a personal note, as we come towards the end of our interview, what's the future holding for you, Chris?
Over the next 12 months, what's going to be keeping you busy?
Speaker 2 (54:37):
So yeah, like I said earlier, I'm spending a lot of time using AI in the strategy space, and to that end, I've been working on what I hope will be a SaaS platform, a software-as-a-service platform, to help organizations do strategic planning and strategic analysis using AI.
(54:58):
I think there's a really interesting use case that's not really been covered a lot in the market.
There's a lot of focus on strategic execution, but I have a really simple question for those tools: how do you know you've chosen the right goal?
There seems to be no one looking at that.
A lot of them, in their diagrams of how their platforms work, have, here's your strategy.
(55:18):
It's a PowerPoint.
Put the PowerPoint in our tool and off we go.
The work it takes to create that PowerPoint, there don't seem to be a lot of tools to support that.
So, very early days, trying to understand what do customers want and what do they need in this space, and then how
(55:39):
could it be built.
But I'm spending a couple of days a week putting my dev hat back on and doing that.
And then my other passion is again the foresight and futures work.
I think that, as our world gets more complex and more dynamic, there's more disruption coming every day, and
(56:00):
without a rigorous way to understand and deal with those changes, people end up treading water and not really moving on with what they want to achieve, and so I'm very passionate about that.
Speaker 1:
And if anyone wants to go to your website and find out more information about you, how do they do that?
Speaker 2:
So, LinkedIn is probably the best way; you can search
(56:22):
for my name.
So, Chris, and the last name is Dury, D-U-R-Y.
My company is called Mountain Moving Company, so that's mountainmoving.co.
You can hop on there.
I've got quite a large article on how to apply futures thinking and use generative AI, especially in small and medium enterprises and not-for-profits, and it also has links to some of the other
(56:46):
tools that I've built.
Speaker 1 (56:47):
Perfect.
Chris, thanks so much for taking the time to do this interview; it took us a couple of goes due to illness on both fronts.
But, yeah, really appreciate the time, and thanks so much for making the time to come on the show.
Speaker 2 (57:02):
Thanks very much, Lee.
It was a real pleasure, cheers.
Speaker 1 (57:08):
So that's another great episode done and dusted.
As always, I'd love to hear from you if you know anyone that's got a really good story to tell about how they are, or are not, living a productive life.
If you want to get in touch with me, please do so via my website, www.leestephens.co.
That's www.leestephens.co.
You can email me, lee@leestephens.co, or get in touch
(57:31):
on LinkedIn, which is where I also hang out.
In the meantime, have a good week.