Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
SPEAKER_00 (00:00):
Welcome to the AI Value Track Podcast.
We're going to be demystifying AI at the request of our customers.
We're bringing together the right people, and I am in charge of dumbing it down.
No jargon.
We're going to explain everything.
We're going to keep it simple.
So this is podcast one in a series of three, and I'm delighted to be joined by Ian Hogg, CEO at ShopWorks, and Ed
(00:23):
Hogg, CEO at Solve By AI.
Hi Ian.
Hi, Simon.
Edward, nice to meet you.
SPEAKER_02 (00:29):
Hi Simon.
Thank you very much for having us.
SPEAKER_00 (00:32):
So we're going to keep it simple for people.
That's my key responsibility.
So if we start talking jargon, I'll pull you up and get you to explain.
This is about demystifying and keeping it real.
So in terms of why we're doing the podcast, what are listeners going to get out of it, and what's the hope?
SPEAKER_01 (00:50):
Well, we want to share some of the experiences we've had helping customers implement AI solutions.
And what I find with the customers we speak to is that they're nervous to start because of a lack of awareness or knowledge.
So hopefully we can share some of the things we've learned, some of the mistakes we've made and some of the pitfalls
(01:13):
we've come across, but also some of the good tips on how to do it right.
SPEAKER_00 (01:16):
Perfect.
And it's your day-to-day, Ed.
What are you looking to help people with in terms of this series?
SPEAKER_02 (01:22):
Yeah, I think Ian covered it quite well there.
I think one of the key things is to get across the steps you need to take before you start an AI project, what you should be doing during it, and then how you make sure you're taking the most advantage of it and really driving that ROI.
Perfect.
SPEAKER_00 (01:38):
So I'm gonna start
really, really simple.
AI, artificial intelligence.
Ed, do you want to just give us a little synopsis of what that actually means?
It's everywhere; people are talking about it.
What is AI?
SPEAKER_02 (01:52):
So AI is the concept of a machine learning to be intelligent about a task, so therefore it is artificial intelligence.
Essentially, it means that a computer has encoded in itself, in some code, how to solve a problem, and when you go and ask it to solve that problem, it assists with
(02:13):
that.
It used to be very narrow, only a second or two's worth of tasks, and we're now up to about an hour's worth of human work that can be done better by AI than by a human.
Perfect.
SPEAKER_00 (02:25):
So let's take that into the business place, Ian.
Where are the areas that businesses are most open to AI use?
SPEAKER_01 (02:33):
Well, AI is a tool.
So, expanding on what Ed was saying, these are computer tools that run themselves and make decisions themselves.
So anywhere there's knowledge work is open to productivity improvements from AI.
So with the recent ChatGPT launch, which is the sort of leading
(02:54):
software model...
One of the tools, yeah.
Yeah, probably the leading, most well-known tool.
They said that coding, software development, was one of their key objectives, something to revolutionise.
But people are using it in legal for contracts, in finance, for documentation, for making
(03:18):
podcasts.
It's really anywhere a knowledge worker could deliver.
SPEAKER_00 (03:26):
So cast our minds forward ten years, then: we're all going to be redundant, there's gonna be no work for us to do, because the machines are running it all and they're all speaking to each other.
It's a scenario that a lot of people have in their minds.
I personally don't think that's true.
Is that something you see? Is it an enabler or a replacer?
SPEAKER_01 (03:44):
I think ten years is a long time to forecast.
Five minutes in this world seems a long time.
I encourage people, if we're doing an implementation project, to have a three-year vision.
But certainly over 10 years, it's moved so fast in the last two years that, yeah, it is possible.
But I think right now, and in the next year or two, it's an
(04:06):
enabler, an enhancer.
We talk about it as an assist.
You hear the phrase co-pilot.
So for our scheduling tool, we call it a scheduling assist.
It doesn't do everything; it assists you and makes you more productive, and hopefully, at the moment, it
(04:26):
takes away the real drudgery of work and lets people focus on the more value-added tasks.
So I think for the next year or two, that's definitely where we're at.
Could it get beyond that?
Probably nobody knows.
Nobody knows.
SPEAKER_00 (04:39):
Probably is the answer.
And I suppose, to summarise what you've said there: it's helping people make decisions, it's not making the decision.
Is there going to come a point where it will make the decision, do you think, in certain scenarios?
SPEAKER_01 (04:53):
I think they call it human in the loop, which means that the AI does something, it presents some information to a human who makes the final decision.
And the AIs are fallible, I suppose just like we all are.
You hear the phrase hallucinations; you have to check the output.
(05:15):
And so, how long before you might have another AI checking the output of the first AI? People are already starting to experiment with that.
But it's fallible, it remains fallible, so you have to have humans check the output and feed input into it.
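The human-in-the-loop pattern Ian describes can be sketched in a few lines of Python. Everything here is hypothetical and purely illustrative; `ai_propose` stands in for a real model call, and no actual AI library is assumed:

```python
# Minimal human-in-the-loop sketch: the AI drafts, a person makes the final call.
# All names are hypothetical; no real AI library or API is assumed.

def ai_propose(task: str) -> str:
    """Stand-in for a model call that drafts an answer."""
    return f"draft answer for: {task}"

def human_review(draft, approve):
    """The human makes the final decision: accept the draft, or return None for rework."""
    return draft if approve(draft) else None

# A reviewer policy: here, simply reject empty drafts.
decision = human_review(ai_propose("schedule next week's shifts"),
                        approve=lambda d: len(d) > 0)
print(decision)  # the human-approved output, or None if rejected
```

The point of the shape is that nothing reaches the outcome without passing through `human_review`; the same structure works whether the approval step is a lambda, a UI prompt, or, as Ian speculates, a second checking model.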
SPEAKER_00 (05:34):
So anybody watching this, listening to this, who sits around a boardroom: AI will be on their agenda, whether it's a question mark, whether they're doing something with it, or whether they think they should be doing something with it and aren't.
Everybody's talking about it; you can't escape it.
It's on the telly, it's on your phone, it's everywhere, right?
So in your world, Ed, where do most organizations get stuck?
(05:57):
Because there's two things at play, right?
There's tech, and we've talked about tools like ChatGPT, there's Gemini, there's Grok, all these tools that you can type something into and ask to do something: a pretty picture, code for an app, or whatever.
So is it the tech that people get stuck on, on what to do, or is it the change that people get stuck on?
(06:19):
So now I've done something with it, what does that drive?
Uh, it's definitely change.
SPEAKER_02 (06:24):
It's a culture thing.
If you look at some of the most recent surveys, which came out only last weekend, 98% of people want to be involved in the decision at the place they work when it comes to implementing AI.
So there is an organizational want to be involved, but we aren't deploying it in the right way, and that's where projects are getting stuck.
(06:46):
You've also got, from a cultural point of view, data readiness.
So, according to some organizations, like Hubble, only about 8.6 to 10% of businesses out there are ready for AI in terms of their data set.
So being data prepared, spending a lot of time making sure your data is in a usable format, somewhere where it can be easily accessed,
(07:09):
where you can train on it in a secure way: those are the things that people are getting stuck on.
SPEAKER_00 (07:13):
Training data, then.
So again, for the uneducated like me, what does training data mean?
SPEAKER_02 (07:18):
Yeah, so AI learns based off historical patterns, and so what you need to do is have your data ready so that you can put it into an AI.
AI thinks in ones and zeros, like computers do; it has just learned the patterns in those ones and zeros very well.
And a lot of people don't have their data in a place or a form where it's very well structured; they haven't labelled
(07:40):
things correctly in data sets.
So you've got everything being labelled as pints, rather than identifying the type of pint they are, if you want to do that kind of stuff: zero alcohol versus alcohol, those kinds of things.
So those kinds of data set training problems really cause that.
So there's a culture about having your data prepared, but there's also allowing managers to be in the right place to
(08:02):
make those kinds of changes.
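Ed's pint example comes down to specific, structured labels rather than one catch-all category. A minimal Python sketch, using made-up product records purely for illustration:

```python
# Poorly labelled data: everything is just "pint", so nothing downstream can
# tell zero-alcohol sales apart from alcoholic ones.
unlabelled = [{"product": "pint"}, {"product": "pint"}]

# Better: structured labels that a model (or even a simple query) can actually use.
labelled = [
    {"product": "pint", "style": "lager",        "alcohol": True},
    {"product": "pint", "style": "zero_alcohol", "alcohol": False},
]

# With structure in place, basic questions become answerable:
zero_alcohol_share = sum(1 for r in labelled if not r["alcohol"]) / len(labelled)
print(zero_alcohol_share)  # 0.5
```

The same idea scales up: the effort of "data readiness" is mostly turning the first shape into the second, consistently, across the whole data set.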
SPEAKER_01 (08:04):
So sorry, Ian.
Yeah, I was gonna just build on that one, Simon, because I think it's like the biggest change management project of all time.
Every business, whether they like it or not, is gonna be using AI.
SPEAKER_00 (08:19):
And some probably are without knowing.
Yes.
SPEAKER_01 (08:21):
It is the reality.
Yeah, and your staff are using it at home, even if you haven't given it to them officially.
Yeah.
And what we find, so we've done quite a lot of implementations of forecasting and then workforce management scheduling, and when they fail, they fail for cultural reasons and
(08:42):
not poor management, but a lack of setting the expectation with the team and bringing the team along.
And so I think it's definitely a culture challenge more than a tech one.
We haven't had any failures because of tech.
We've had failures, or delayed projects,
(09:04):
and then having to restart and go around again.
It comes down to things as sensitive as rebranding what you're calling the project.
Like we were saying earlier about using the word assist: auto-schedule sort of implies that the AI is gonna do everything, and schedule assist implies that it's gonna help you.
(09:24):
And those cultural differences matter.
One says you as the store manager aren't involved, and the other one says you are involved.
So yeah, it's cultural, definitely.
SPEAKER_00 (09:33):
So senior leaders, then: how do they bring their teams with them?
In an organization, you'll typically have the senior team that buys something and decides to go that way.
That then trickles down, doesn't it, through the hierarchy, down to store or restaurant level, whatever the organisation is.
So what things should they be thinking about?
You've mentioned a few there, but are there any other
(09:53):
points you've got that they should be thinking about?
SPEAKER_01 (09:55):
There have been quite a few studies showing that if senior management aren't living and breathing it and aren't involved, leading from the front and setting an example, then it's likely to fail.
But probably the key is setting objectives at the start, making sure that the
(10:21):
whole team are aware of those objectives, aware of their role in it, and aware of why you're doing it.
Because there was another survey, I think the same survey Ed was referring to, where more than a third of employees have zero trust in their company's AI plans, and two-thirds just don't trust the company at all.
SPEAKER_00 (10:41):
Do they need to know it, I suppose, before they trust it?
SPEAKER_01 (10:44):
And if you leave a vacuum, if you're not sharing with your employees what it is you're trying to achieve and why, then that vacuum might get filled with negative responses, or suspicion, or lack of trust.
So you need to fill that very early, and it's a key part of many projects.
SPEAKER_00 (11:05):
And most colleagues in an organization will have Gemini, Grok, ChatGPT, all three on their phone.
So again, you could be in situations where your colleagues are more armed with AI tools than the business itself.
SPEAKER_01 (11:22):
Yeah.
Well, there was a study done where 90% of the people in the study were using AI at home, or personally, and only 40% were using tools actually supplied by the company.
And so that's another thing.
(11:43):
In fact, we were guilty of this until we bought the licenses.
Up until you buy the licenses, train people, and give them some boundaries on what data they can and can't put into an AI, they are likely to be just doing it from home, on their own devices, and putting your data, the company's data, into a model where they don't know
(12:05):
where that data's gonna end up.
SPEAKER_00 (12:07):
I was gonna say, that leads me to the next question, really: where does that data go?
Is it the same in all the different models?
Is it different by model?
Am I giving my data away to train something else?
I don't know.
SPEAKER_02 (12:17):
Yeah, it depends.
There have been some pretty public leaks very recently for some of the large language models.
I guess you have to do your research on it.
So there are enterprise editions that allow you to restrict where that data goes, where you can take a copy of the code, for want of a better word, and
(12:37):
only run your model on that, without it seeing the rest of the internet.
But at the same time, you aren't getting the full effect, because the AIs are learning all the time, so you want to be able to expose it.
So do your research as to which one to use.
There are more and more enterprise editions which have high-quality security on them, so make sure you're doing that.
The other thing, just to add to the point that you
(12:59):
guys were just talking about: measuring adherence to the AI.
The AIs make some really good recommendations, they make some decisions, but until you, as a leader, measure how well your team are adhering to what the AI recommends, you're not going to see that true ROI.
So I think measuring AI adherence, whatever the subject
(13:21):
or decision is, is quite a useful metric for a leader to ensure that they bring their team along.
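The adherence metric Ed suggests is just the share of AI recommendations the team actually followed. A sketch in Python, with invented scheduling numbers and field names, purely for illustration:

```python
# Hypothetical log comparing what the AI recommended with what the manager did.
decisions = [
    {"ai_recommended_staff": 8, "actual_staff": 8},  # followed
    {"ai_recommended_staff": 6, "actual_staff": 6},  # followed
    {"ai_recommended_staff": 4, "actual_staff": 7},  # overridden
]

# Adherence = fraction of decisions where the recommendation was followed.
followed = sum(1 for d in decisions
               if d["actual_staff"] == d["ai_recommended_staff"])
adherence = followed / len(decisions)
print(f"AI adherence: {adherence:.0%}")  # AI adherence: 67%
```

Tracked over time, a low or falling adherence number is an early warning that the team doesn't trust the recommendations, well before it shows up as a missing ROI.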
SPEAKER_00 (13:26):
Yeah, so that leads us nicely on to my next point.
The internet's full of cats dancing on the moon on a skateboard with a bloke behind on a hoverboard, knitting; I'm making it up, but something like that will be there somewhere.
And that's all neat and fun and nice and all that kind of stuff, but it doesn't really work in business.
(13:47):
So how do we pick the first three use cases from a business point of view, and then define what success looks like?
SPEAKER_02 (13:56):
I think you have to align with KPIs first.
Let's take you being a retail business for a second: if you're a retail business and you're focusing on increasing sales, you can't go and deploy an AI that reduces your staff hours or is focused on reducing your staff allocation in a
(14:18):
location, because they aren't aligned.
And that will seep through.
You won't have success in the projects because you haven't aligned with your cultural goals.
So that's the first thing.
Yeah.
The second is that you really want to focus on what you have the data for.
So coming back to: do I have the data? Is it in a format where it can be trained on now? How much effort do I need to get there?
(14:40):
And then the third one is more about your ideal workforce group.
What are you looking for your workforce to do?
What are the bits of their jobs that they enjoy?
And what are the bits of their jobs that are just mundane, where they really aren't learning anything and don't want to do it anymore, that you can help replace?
(15:01):
Because that really does lead to quite a high adoption rate.
Keep them narrow, make sure that they're very focused projects, and make sure your first three use cases are focused, are going to have a good return, that you've got the data for them, and that you're able to align them to your KPIs.
SPEAKER_01 (15:19):
Just building on that: I think what Ed's referring to there is that there are some big projects where, I don't know, you optimise stock, you use AI to reduce the amount of stock in a warehouse.
Those sorts of projects are big, they're easy to set some measurements against,
(15:40):
and they're often the sort of projects that get a good ROI.
Yeah.
At the other end of the spectrum, there's also a bit of experimentation required.
So for instance, as we're trying to get adoption in our company, we're giving people ChatGPT, and we don't yet know all the use cases; they'll
(16:04):
come up with them.
They're more creative and innovative than I am.
So actually, if you give people a 20-bucks-a-month ChatGPT license and say go play with it, and these are the rules on what you can and can't do, they're gonna come up with examples, and some of them might be great and some of them actually don't save any time at all.
(16:24):
So those individual tasks and little projects that individuals do with those tools, it's really hard; you're not gonna set KPIs in advance, you're just gonna let them go and experiment.
And for those, I think it's best to have a shared "look what I've done, that's cool, and I've saved half an hour a
(16:45):
day".
So I think there are two ends of the spectrum: the big set-piece projects that Ed was referring to, where management are trying to get one percent off some big metric, and then sharing the tools around and sharing the best use cases, and hopefully something positive will come out of it.
SPEAKER_00 (17:06):
So kind of play-and-learn type.
Yeah, just experiment between the teams.
And there's a phrase, shadow AI, which again you'll have to explain to me, that we should be avoiding, apparently.
SPEAKER_01 (17:17):
Yeah, I think that's what I was hinting at earlier when I said that if you don't supply them the tools, people will use them themselves.
And funnily enough, a simple project, and this is how we did it internally at ShopWorks, is we did a survey: what tools are you using yourself?
(17:38):
Is the company paying for it?
Because you find these tools are not massively expensive; they're £20 here and there.
And if you've got a software engineer on £50,000, £60,000, £70,000, then a £20 addition to make them a few percent more productive is not a huge investment.
So what you find is that you get a proliferation of these tools,
(17:59):
but quite often the shadow AI bit is that people are implementing them themselves.
They've got their own license or they're using the free version.
And it's not the most complex project to run.
You do a survey, find out what people are using, find out what they need, do a little risk assessment on the tools,
(18:21):
and then pick your official tools, buy them for people, and then train them.
And I think the key bit there, the final bit, is training.
You hear stories of people signing up for a load of ChatGPT licenses and nobody using them, because nobody's been trained or encouraged to use
(18:42):
them.
So just to summarize: the shadow element is people using their own tools outside of the control or guidelines of the company.
But potentially using company data in it.
Yeah.
And there are risks to that, considerable risks.
SPEAKER_00 (18:58):
And in terms of before rollout, Ed, is there a typical checklist of stages that people should think about before they start adopting, building on Ian's point?
SPEAKER_02 (19:08):
Yeah, not to be boring, but data again.
Data is one of the most important.
Aside from that, you need to have an owner, somebody who owns the project.
It's classic project management, really.
You need to have an owner, and you need to have clear communications internally as to what the success criteria are for rollout, and also what's going to be happening within it, to build
(19:31):
trust within the organization.
And then one of the quick checklist items I have is to make sure that we have a planned feedback point.
These AIs can learn at such a rapid rate; you have to take advantage of that.
If you're implementing in a factory, go and ask the people on the shop floor: if they're using a tool, is it successful?
If you're working on a cruise ship, you need to go and
(19:54):
ask the passengers whether they're liking whatever you're doing.
You need to go and ask these people as soon as you possibly can, to try and get the learning into it so that it can learn quicker and you get better adoption.
So I think those are the things that I would focus on.
SPEAKER_00 (20:08):
Any tips, Ian, or Ed again jointly, for avoiding failure on top of those?
SPEAKER_01 (20:14):
So I think select the right project.
I know that sounds obvious, but you almost want to go for the low-hanging fruit.
So if I was coming into a company and somebody said, right, we want to start adopting AI, then, like I said, there are those two ends of the spectrum.
If we were on the bespoke end, where it's a big set-piece project, I'd be looking to
(20:37):
get what they call a halo-effect project: a big win.
So pick the project that's most likely to deliver that, get everybody behind it, give it high priority, and then the organization will learn from that project, but it'll also be inspired by it.
Because if people are running around saying I've saved two percent of my stock costs or increased
(20:59):
sales by one and a half percent, then funnily enough, other people get enthusiastic about it.
SPEAKER_02 (21:06):
Yeah, I think for me: realise that it's assistive.
It's an assistance tool; it is not replacing a workforce.
Use it to assist your workforce, don't use it to replace your workforce.
Enhance, not replace, I think, is the phrase.
And I think that as long as you do that, you're gonna
(21:27):
bring along your staff, you're gonna drive an ROI within your business, and you're gonna introduce people and use that halo effect, as Ian was talking about, to really increase the adoption of AI within your organization.
SPEAKER_01 (21:39):
Well, one of the phrases that gets talked about a lot in this space is whether you're using AI for efficiency or productivity.
What that's really saying is: if you're using it for efficiency, i.e. there are managers going, right, we can put the AI in and we can cut staff costs, then it's no wonder staff don't trust the motives.
(22:02):
Yeah, the machines are coming.
And the other one is actually: fine, if we take the same team we've got now and give them all the right tools, how much more can we do?
Can we sell more? Can we build more product? Can we deliver more podcasts?
Whatever your output is, if you make your whole team 20% more productive,
(22:25):
could you get 20% more growth from relatively the same cost?
Or do you want to take 20% of the cost out by putting AI in?
And as you'd expect, there are managers that take both.
That is a sort of fundamental debate that goes on, and different companies will take a different approach.
SPEAKER_00 (22:45):
Yeah, and some might want a hybrid, a mix of both.
But current trading environments are tough.
Whatever you're in, retail, hospitality, manufacturing, consultancy, everybody's cost base is going up.
So saving your way out of it always feels like a relatively blunt tool long term; short term it might work.
But actually, that freeing-up of time to drive service and better conversations,
(23:08):
and, if you're in a physical environment, to drive average transaction value from your customers, because getting new ones is tricky: selling more to the same is probably the sweet spot you'd want to see it used in, I guess.
SPEAKER_01 (23:21):
Or give them a better service so you don't lose them.
Retention is a good thing.
Absolutely.
The CEO of Shopify wrote a quite famous memo to his staff, and, I say famous, famous amongst AI geeks.
He did a memo and said, before you create any new hires, consider whether that job could be done
(23:45):
by an AI.
And one of the things that is probably happening is, I think it's already having an impact on hiring, particularly at the more junior levels.
There's plenty of data out there to support that.
So I think where people are getting the cost saving is they're not necessarily letting existing people go,
(24:05):
they're just not adding; they're using it to put headcount freezes in whilst generating growth.
Yeah.
So yeah, that would be a hybrid version of it.
But the objective is still to generate the growth.
And of course, it ought to be more profitable growth if you've managed to not add too much more staffing to generate that growth.
(24:26):
Perfect.
SPEAKER_00 (24:27):
Ed, Ian, thank you very much for joining us on episode one.
I think my key takeaway is you've got to be thinking about it, you've got to be starting, but the underlying data is paramount.
So make sure you've got the data in the right format.
And again, even if you're not on the journey at the
(24:48):
moment, you've got to be preparing that data, because at some point you're going to be on the journey.
But it sounds like there's a wealth of opportunity and productivity and efficiency gains to be had.
Some are there, some are maybe a bit behind, some are starting to get ahead.
But thanks for your insights, and we'll speak to you on future episodes.
Thank you, Simon.