Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
SPEAKER_02 (00:01):
Welcome to the AI Value Track. This is podcast two in a series of three. I am delighted to be joined by a colleague of mine, James Boll, head of Data and Insights at Rethink Productivity. Hi, James.
Hello, Simon. Thank you for having me.
And returning from episode one, Ian Hogg.
Thanks, Simon.
So we're going to get into the nuts and bolts today. We're going to talk about those tools that individuals use
(00:24):
in their day-to-day lives in business to make them more productive. We talked about the benefits in episode one: freeing up time, all that kind of stuff. So we'll jump straight in then, Ian. Talk us through some of the tools that are available for individuals at corporate level to buy, that they might be using now or might want to start thinking about using.
SPEAKER_00 (00:45):
Fine. So probably the first place to start is the very personal tools like ChatGPT and other large language models, basically LLMs.
LLMs. That's the way.
SPEAKER_02 (00:58):
You spell it out and I'll introduce them.
SPEAKER_00 (01:00):
Yeah, so they're called LLMs, and they're a sort of chatbot where you type into it, or talk to it verbally, and it gives you an answer.
Okay.
SPEAKER_02 (01:14):
And they're all sophisticated, aren't they? I think one of them's got deep reasoning, where they go off, and one of them will write you a deep research report. I was using one the other day, maybe Gemini, and it said, do you want me to build you an infographic of it at the end? And it was like, yeah, if you really want to, it'll look great.
SPEAKER_00 (01:30):
Exactly. And they're the sort of thing where you write what's called a prompt, giving the model a task, and it comes back with an answer. And that answer could be a PowerPoint presentation, or it could build you a business spreadsheet.
SPEAKER_02 (01:44):
Again, my naivety, but I'll say it. You can't treat it like Google, right? Well, you can, but you'll limit its use. You can't just say, find me the best laptop for business use under a thousand pounds, because that's really what you use Google for, right?
SPEAKER_00 (01:57):
Well, here's the difference: if you asked a good LLM that, it would say, tell me a bit more about what you want. It would come back and ask you some questions to refine the search: what do you want the laptop for? What's your budget? So that's the first area, these personal tools.
(02:17):
The second one is what I would call embedded AI in existing tools. If you think about it, most companies have got tools that everybody uses.
SPEAKER_02 (02:26):
So Gmail, email. And iPhones, whatever it is now, the 16, have got their AI in, haven't they?
SPEAKER_00 (02:34):
Yeah, but what I'm saying is there are software tools that the whole company uses. It could be your email suite, your video conferencing suite, something like Jira, which is a ticketing system for problems, or project management software, or CRM. All of those sorts of tools.
(02:54):
What I would call embedded AI is the sort of thing where you make available, for instance, note takers: if we're doing a video conference, everything always wants to take notes.
SPEAKER_02 (03:06):
Everything summarizes it and then sends me an email with it afterwards.
SPEAKER_00 (03:10):
Exactly, that sort of stuff. If you're writing a ticket, like a problem ticket to send to a help desk, it might prompt you with a format and then come back to you and say, you haven't answered this question. I would call that embedded AI in existing
(03:31):
systems, and those can help productivity.
We talked in the first episode about scheduling, and scheduling assist. That's a good example, where you've got a workforce management platform and then a tool that helps you. Even when I'm writing a PowerPoint now, the little pane will come up and say, do you want a different design?
SPEAKER_02 (03:52):
Or, do you want me to... It almost auto-corrects or restructures the sentences at times now, rather than just catching my appalling spelling.
SPEAKER_00 (03:59):
Yeah. And some of those have got the LLM embedded into the system.
SPEAKER_02 (04:05):
Just to stop on that point: we're probably using more AI features than we actually know.
Yes.
Because it might not be called AI. Microsoft uses Copilot as a word, but there's loads more stuff out there than we probably realize we're using.
SPEAKER_00 (04:22):
I think now almost every piece of software you're using is going to have some sort of AI. If you were doing a tender for a new software tool for your company, you're going to ask the vendor: what's your AI roadmap? What AI do you have now? Is there a chatbot? Do I have to actually read the help files, or can I
(04:45):
just question it and say, hey software, can you tell me how to set this up? So AI ought to be embedded in those sorts of tools.
And then the third category. So the first one was the LLMs, the large language models. The second one's embedded AI, and the third one is what I'd call AI native tools.
(05:07):
These are being sold now almost with a job title attached. So for instance, legal AI: it's like an in-house lawyer that will do all your contracts. Or an AI sales development rep.
SPEAKER_02 (05:25):
So it's got a specific job title, almost, that then leads to the task.
SPEAKER_00 (05:30):
Yeah. So a good one is the sales development rep. In software and a lot of other businesses, that's the person we would have called tele-sales when I was younger. But now it's all on LinkedIn and messaging. And somebody, or several people, have built an AI that just manages that sort of process. Those aren't just layering on or embedding a
(05:54):
bit of AI into an existing platform. Those are saying, right, from a clean sheet of paper, how do I give somebody a tool that is effective, that is almost equivalent to replacing a full-time person? And again, we talked about it in the first podcast in the series: that's the sort of tool that is
(06:15):
replacing entry-level jobs. So a senior salesperson gets his AI SDR. Not so long ago, a senior salesperson would have had a real SDR. You've still got the senior salesperson, but a lot of the legwork on LinkedIn and email is done by an AI.
SPEAKER_01 (06:33):
What's SDR, sorry?
SPEAKER_00 (06:34):
Sales development
rep.
Telesales.
Yeah.
SPEAKER_01 (06:37):
In the old world.
In the old world. And no telephones. But I think I might have used this example before, at our productivity forum. People talk about robots. As soon as the robot is useful in doing a specific job, it's not called a robot anymore. You have several robots in your house: they do your dishes for you, they clean your clothes, and they're called dishwashers and washing machines. AI is exactly the same.
(06:59):
People talk about this big idea of AI, but as soon as it's actually doing something useful, it just disappears into the tools and into our day-to-day jobs, and we don't talk about it being AI anymore.
SPEAKER_02 (07:08):
Yeah. And again, for people watching or listening, I'd challenge them to think about all those product suites we just talked about. As a business, Rethink, we use Google and Gmail, and we get Gemini in there, but it's also now appearing everywhere: it will give you three suggested responses to an email.
Yeah.
You have to be really careful, because it might not be your tone or language, and sometimes you might be saying that's a
(07:31):
great idea when the email might not be such a great email, and vice versa. So as ever with this.
And sometimes it takes longer to read the three responses, and it does just lie.
There is that.
And there's some interesting stuff: it'll rewrite it, it'll soften it, it'll make it more formal. So there's some really cool stuff that has a real impact on people, and probably gets you out of some situations or helps you
(07:53):
avoid them. But my point is, whenever I personally use those, it probably doesn't register with me that I'm using AI today. All those features come, and new ones come. On Google Meet now you can have different backgrounds, you can put stupid faces on, it takes notes for you and emails them to you, and it gives
(08:14):
you a summary, and it gives the action points with who they've been assigned to. And all it is is a pop-up that comes up and says, now you can do this, and you go with it, or you don't. It's that ingrained in what we do. I suppose the point I'm making is that it's evolving quickly, but I never sit back and think, I've used a lot of AI today. It's just the way it is.
SPEAKER_01 (08:35):
No, and they're just tools that make you more productive. People wouldn't use them if they didn't make their jobs easier. They have names, and people don't think, I'm using AI to do this; they think, I'm using this feature in this tool, and it's making things better. And people using ChatGPT to help them write tend to write
(08:56):
more quickly, but also at higher quality. So it increases their productivity. People in knowledge economy jobs can use large language models to make their writing more effective. So there is evidence.
SPEAKER_02 (09:06):
Not just critical.
SPEAKER_01 (09:08):
I mean, there are studies that show that it is. People can code more effectively, or more elegantly, using a large language model than just doing it from scratch.
SPEAKER_02 (09:18):
My counter would be, if I'm playing devil's advocate: are we not just getting lazy, then? I feel lazy sometimes with a canned response. It feels like the right answer, but all I've done is press a button.
SPEAKER_00 (09:29):
I listened to a podcast with Steven Bartlett the other day, and they had a guest on who was saying that by using all these large language models we were all going to get dementia, because we'd be so lazy that we actually weren't thinking for ourselves. So possibly. I'm no expert on that one.
SPEAKER_01 (09:44):
Possibly. But I read a case study where T-Mobile had used some AI so that, while an agent was on the phone to a customer, the AI would read through that customer's service history, how many times they'd called and what they'd called about, and would prompt the agent with questions to ask the customer. And guess what? Customer satisfaction went up, job satisfaction went up because
(10:07):
they were having nicer conversations, and sales went up. So it's a win-win-win. If you're using the tools, and I think you talked, Ian, in episode one about having humans in the loop and using AI to enhance people, you can do this not to make people lazy, but actually to make them better and enjoy their work more. And that's what you're looking for. There probably are people that use it
(10:29):
in a lazy way, like the lawyer who put all their case details into ChatGPT and in court cited a case that ChatGPT had made up, and was called out on it. And that's the danger, right?
And that's the danger.
SPEAKER_02 (10:40):
Exams, kids doing coursework on them. While ever there's a positive side to this, there are always dark sides. There are other ways people will use it.
SPEAKER_01 (10:50):
Yeah, but if you engage your brain, and if in your organization you're giving people the tools and showing them how to use them properly and how to think about them, then I think there's lots of evidence that productivity is increased. What's also interesting is there's a Deloitte study looking at what leaders say about how AI has changed their organization. I think more of them still talk about it saving them money than
(11:12):
giving them new ideas and making the quality better. But it's proven in tests on individuals that if you are using the tools in the right way, then it can make you more productive.
SPEAKER_00 (11:22):
Going back to the laziness point, Simon, a lot of the tasks it's taking off people are the boring, mundane stuff they don't want to be doing. Like looking for a bug. If you take software developers, it's really good at: this doesn't work, I've been playing around with this, I can't
(11:43):
crack the puzzle. It looks at it and says, yeah, you've got a comma missing there. Can't see the wood for the trees, that kind of stuff. And we've recently trained our operations team at ShopWorks in how to use some agents. One of the guys came up with a use case where he had the most mundane, horrible job on the back-end system, which he
(12:08):
managed to get the agent to build, and he was just delighted. It's no longer one of those dread-doing-it, leave-it-till-Friday jobs. The satisfaction of getting that solved was great for him.
So I was going to ask: how do we get AI usage up?
SPEAKER_02 (12:24):
After the start of this podcast, I kind of feel we're using it more than we think. But if you still don't think you're using enough of it, what kind of things can people be doing? We talked in episode one about paid subscriptions, making sure people have got the right tools in that business context, so they're not maybe using the free version on their phone.
(12:45):
But are there other things, Ian, that people could be doing?
SPEAKER_00 (12:48):
So I can quote an example we did, too. One of the projects we're doing at ShopWorks, as well as trying to get the software we sell to be much more AI-enabled, to use the category we talked about earlier, is trying to get our team to use AI as much as possible, so they can be efficient and we can deliver projects quicker and
(13:08):
better for the customer. One of the things we did is we took the operations team, the people who help implement new software: project management, managed service, software configuration, that sort of stuff. We bought them all a ChatGPT license, and the first thing we did was get on a Zoom call, and we asked some
(13:31):
questions about what people knew and what they didn't. A lot of people had some knowledge, but these software tools, if you use them, add new features every week.
SPEAKER_02 (13:44):
Every day at the moment, exactly. If you haven't been on it, it pops up and goes, you can do this now.
SPEAKER_00 (13:49):
If you haven't been on it for three weeks, you might not know what one of those buttons does. So we went through it line by line, and then we set a task for people to go away and experiment, and come back and share their best use case. That was part of the training session. None of that's rocket science; that is just training.
(14:10):
But it does surprise me how few people are getting formal training on ChatGPT. Of course, the best place to get training on ChatGPT or any of these models is to ask it to train you, and it will, but even getting into that mindset takes a shift. So, how do you get usage up? Training, and encouraging people to think differently.
(14:33):
So encourage people to ask ChatGPT.
SPEAKER_02 (14:38):
Is it like your colleague's example? Is it almost brainstorming with the team: what's all the mundane stuff, what's all the stuff that takes time? Proportionally, we spend a lot of time on data entry, duplication, whatever. Get those use cases sticking out, however you want to do it, and then try to use it to solve them. So back to episode one, and maybe what we'll talk a bit about in
(15:00):
podcast three: what are the business challenges I'm trying to solve?
SPEAKER_01 (15:05):
100%. You have to work right to left when you're thinking about AI and start with the problem you're trying to solve, because too much of this tech is just tech looking for a problem.
SPEAKER_02 (15:13):
You go down a rabbit hole, back to pretty pictures of cats on the moon. You end up in all the nice stuff, and when you come to the end of it, you go, actually, what benefit has that had?
SPEAKER_00 (15:23):
And another one is to share success stories, when somebody within a team has used it to do something clever. But I think it's almost an imagination challenge for people. These things are incredibly powerful; they can do a lot of tasks. But the first time you start using it, you use
(15:43):
it once, you think that's clever, but you actually don't do anything productive.
SPEAKER_02 (15:46):
You've touched the tip of the iceberg at that point, though.
SPEAKER_00 (15:48):
Yeah, it's clever, but actually it didn't save me any time. And now I've got a picture of a cat dancing on the moon.
SPEAKER_02 (15:55):
Lottery numbers, that was surely the way to go. Predicting lottery numbers: no one's talked about that, have they? Right. Can we edit that out, because we're giving away IP now? I mean intellectual property, by the way.
SPEAKER_01 (16:08):
I think the example I heard, and I know the pace of change has increased over time, is that after the introduction of electricity, it took about 30 years in the Western world for steam and horses to be replaced. And people have effective ways of doing things today without AI. There's no reason for them to change unless they see it
(16:29):
or they feel it.
Or dislike it that much.
Or dislike it that much. And some people might dislike it but be afraid to change, because they don't want to lose their job. So the organization has a lot of work to do, I think, to encourage people to experiment, to make them feel safe to experiment and adopt these things. And it's a lot easier in what you would call digital-first or digital-native businesses.
(16:50):
An organization that's been going 15, 20 years is different. When I first started my first job, we had one computer in the office that was connected to the internet. We used to fax everything.
SPEAKER_02 (17:00):
Dial-up.
SPEAKER_01 (17:00):
Yeah.
SPEAKER_02 (17:01):
For those that don't know what dial-up is, that's when you almost had a phone on a modem.
A modem, yeah.
Well, you might have a router now, similar, and it used to dial. And when it was on, you couldn't use the phone, could you? And you were paying like 30p a minute or something stupid, and it was really slow. Well, ChatGPT gave me a history of the internet.
There you go.
(17:21):
For anybody listening or watching, that's your first prompt.
What was your prompt?
Tell me about the history of the internet.
SPEAKER_01 (17:27):
There you go. But there are legacy businesses that have got lots of data or lots of processes from a time even before email and the internet, and trying to get AI into that is clearly going to be difficult. And you have all these startups coming in and just doing everything with AI, and that gives them an agility that's a bit of an advantage over the big organizations.
(17:48):
And from the research I've read, you look at China, India, some parts of America: businesses there are just doing everything with AI and outcompeting the legacy businesses really quickly. So to a certain extent, I think it's important, imperative almost, for managers and leaders in organizations to start getting people to adopt it, because otherwise
(18:10):
you'll get outrun, outmaneuvered, by somebody more agile.
SPEAKER_02 (18:13):
Yeah. And look, just to get on to agents: where should firms begin? We'll talk in the next bit maybe about how we get started with agents, but where should firms begin? What things should they start to think about in terms of workflows?
SPEAKER_00 (18:28):
Is it worth just clarifying what an agent is?
It is, indeed.
So we talked about the large language models: Gemini, ChatGPT, Claude, a few others. You run an individual task on one. As a person, I log on to it, I type in a prompt, and it does the task and gives me the answer back.
(18:49):
So it's a sort of one-to-one.
SPEAKER_02 (18:52):
Well, that could be a recurring task, or it could be a one-off.
SPEAKER_00 (18:55):
On the whole, these things don't do scheduling. So you can't say, for instance, do this task every Saturday at nine o'clock. You do the task, and it requires you to be there to manage that prompt and sometimes answer questions. An agent is effectively an AI, normally using exactly the same
(19:15):
technology as ChatGPT or Gemini, where it goes off and does the task for you. Here's a couple of good examples. Within ChatGPT, there's an agent mode, and I have a task where I have to go and gather some competitive information from five websites. It's public information, and
(19:39):
I set the prompt going; I use the same prompt each time and tell it to go off to these five sites. It literally opens a little window, opens the sites, scrapes the information, puts it together in a report, which it then makes available for me.
Okay.
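The scrape-and-report task described here can be sketched in outline. Everything below, the site URLs and the fetch_page stub, is a hypothetical illustration of the shape of the task, not ChatGPT's actual agent mode, which does the browsing itself:

```python
# Illustrative sketch of the "five websites -> one report" agent task.
# SITES and fetch_page are hypothetical stand-ins for the real pages
# and the agent's browsing step.
from typing import Callable, Dict, List

SITES: List[str] = [  # hypothetical competitor pages
    "https://example.com/competitor-a/pricing",
    "https://example.com/competitor-b/pricing",
]

def build_report(fetch_page: Callable[[str], str], sites: List[str]) -> str:
    """Visit each site, collect the text, and compile a single report."""
    sections = []
    for url in sites:
        text = fetch_page(url)          # scrape step (stubbed here)
        sections.append(f"## {url}\n{text.strip()}")
    return "# Competitive summary\n\n" + "\n\n".join(sections)

# Usage with a fake fetcher, so the sketch runs without network access:
fake_pages: Dict[str, str] = {u: f"Pricing info from {u}" for u in SITES}
report = build_report(fake_pages.get, SITES)
```

The point of the shape is that the loop, not the person, visits each site; the agent equivalent replaces the stubbed fetch with real browsing.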
Another task an agent might do: we use it for some
(20:02):
software configuration. It goes into the settings for a customer. We've got these sorts of prompts written, so it goes on and might configure, say, a lunch break rule in workforce management. You give it the standard template, it goes off, it logs
(20:23):
on. You have to log on for it, actually. It then configures it, and it's more likely to do 20 of these; it's not going to do one rule, it's going to do all of the rules. Then it goes and creates a shift that meets the criteria for the rule, and confirms that it's correctly set up. And it can go and run that. So that's what an agent would do. It's a specific task, or a specific set of instructions,
(20:47):
yeah.
And the productivity gain for the company is to let people use their imagination and go and find all of these individual micro tasks. That particular task, and hundreds of other ones like it, like my job where I have to go and get information from five competitors' websites.
(21:08):
This is the power of it: we're not going to get a project team together, set a budget, set some KPIs and measurements. We're just going to give somebody a tool, and when they get fed up with doing this boring job on a Monday morning before they can do the rest of their job, they're going to train an agent and just run it every Monday
(21:28):
morning, then open another window and run another agent to go and do their next task. So that's the difference between an agent and a standard prompt. And there's one other nuance: those agents within ChatGPT or Gemini, you
(21:49):
still can't schedule them. I'm sure it's coming; it sounds like an obvious next step of, every Monday, produce the report for me. But there are other tools out there, automation tools, which can connect to OpenAI or Gemini or Claude or any of these models, go and perform the task, and they can be scheduled.
(22:10):
So instead of me having to wake up on a Monday morning at nine o'clock and cut and paste that prompt to go to those five websites and set the agent running, there's a tool that just wakes the agent up and does it every Monday morning. So the productivity gain is that each individual builds a suite of agents to automate the
(22:32):
boring, mundane tasks. And when they wake up on a Monday morning, it's already there, presented, and they actually start adding value by reviewing the information and making decisions based on it.
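The "wake the agent every Monday at nine" trigger can be sketched with nothing more than date arithmetic. This is an illustrative sketch, not how any particular automation tool works internally; a real scheduler sleeps until the computed moment and then fires whatever kicks off the prompt:

```python
# Sketch of the Monday-morning scheduling idea: compute the next
# Monday 09:00 that a scheduler loop should wake up at.
from datetime import datetime, timedelta

def next_monday_9am(now: datetime) -> datetime:
    """Return the next Monday 09:00 strictly after `now`."""
    days_ahead = (0 - now.weekday()) % 7  # Monday is weekday 0
    candidate = (now + timedelta(days=days_ahead)).replace(
        hour=9, minute=0, second=0, microsecond=0)
    if candidate <= now:                  # already past 9am this Monday
        candidate += timedelta(days=7)
    return candidate

# Usage: from a Wednesday midday, the next trigger is the following Monday.
trigger = next_monday_9am(datetime(2024, 5, 1, 12, 0))
```

A scheduler loop would sleep until `trigger`, run the agent prompt, then recompute for the following week.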
SPEAKER_02 (22:43):
Any experience with agents, James, that would be good to share?
SPEAKER_01 (22:46):
Nothing that has yielded anything concrete, but I think, with the way agents are going, everyone talks about agentic AI, and what Ian's just described is basically what that means. It's no longer sufficient to have an interface you ask questions to and it gives you answers. It's goal-based now. You want to say, this is what I'm trying to achieve, and can
(23:07):
your agent achieve that for you?
SPEAKER_02 (23:09):
And I assume, in the future, the scheduling of it, each week produce me this report, feels like a fairly simple next step in this day and age. The next evolution from that becomes: scrape these five websites, produce me this report in this format, email it to these people, but also look for wider opportunities on
(23:30):
other websites, or across the web, that I've missed. So fill in my blind spots.
SPEAKER_01 (23:35):
Yeah. Or autonomous agents, I guess, are probably the goal: it sees, oh, you've had a meeting come into your diary next Friday, you're going to need this information, here's the report ready for you. Or, you ran this report, and I thought you'd be interested in this as well.
SPEAKER_02 (23:48):
So that's a personal one. I also, probably naively, see a world where there are business-to-business ones and business-to-customer ones. So at some point I get an email or text, whatever it might be, saying: Simon, your personal agent has negotiated you a better rate with British Gas, because we've looked at your
(24:08):
contract, we see it coming up, we see you charge your electric car, we think it's going to be sunny for the next 12 months, so we've got you this rate.
unknown (24:14):
Yeah.
Yeah.
SPEAKER_00 (24:16):
Yeah, I agree, all those things are coming. Right now, though, for businesses, some of the tasks we just talked about that you can get agents to do can save development work. Nobody's ever got enough resource: everybody's got technical requirements to be built, or
(24:37):
they want their vendor, the supplier of their software, to change the product. Some of these agents allow you to bypass that, so you can effectively build your own feature on top of a software tool.
SPEAKER_02 (24:48):
And an individual can do that just for the one task. This is where I get dangerous, though, because this is where the likes of me, who has zero technical ability, will produce something that looks great, and then when it breaks, has no idea how to fix it. And that's when I think we rely on the agent or the AI to do it. If they can't fix it, we're in a world of pain.
SPEAKER_00 (25:09):
Well, you say that, but we talked earlier about adoption, and I don't think we've completely closed that off. One of the best ways to get people to adopt agents, and anybody can try this at home, as they say, is to give a few-line prompt to ChatGPT's agent mode and set it going. It takes literally a
(25:35):
minute: give it a task, go to these five websites and get me this information. It will talk you through what it's doing; it'll say things like, I'm clicking on the home button, but I can't find what I'm looking for, so I'm going to try this. And it could take 10 to 15 minutes to do. Once you've done it once, you then say to ChatGPT, could you
(25:56):
tell me how I could have made it easier for you to have done that? And it'll say, if you'd told me that all the information was on a page called this, and given me the five URLs, I could have done it in a minute. They are remarkably persistent, so it will keep going, and you don't get many failures
(26:18):
with them. What you do get, and yes, OpenAI are paying the compute cost for that, is long tasks that take too long.
SPEAKER_02 (26:29):
It's a logic tree, I assume. When it hits a dead end, it'll reverse up and go down the next one, getting further and further until it thinks it's done it.
SPEAKER_00 (26:36):
It'll keep trying. If you give it a really simple prompt, it'll keep going. But you can give it a more direct prompt: go to this page, look at row seven, collect that bit of data, it's in this format, and then put it into this attached spreadsheet in this format.
So clear instructions.
Yeah, exactly. Like any assistant or coworker: you give it a clear instruction, and it'll just go and be much quicker.
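The vague-versus-direct contrast can be made concrete. The field names and example values below are purely illustrative, not any tool's real prompt schema; the point is that a direct prompt spells out the page, the exact location, the expected format, and the destination:

```python
# Sketch of the vague-vs-direct prompt idea from the conversation above.
# build_direct_prompt and its fields are hypothetical, for illustration.

def build_direct_prompt(url: str, row: int, fmt: str, sheet: str) -> str:
    """Assemble a step-by-step agent instruction from explicit fields."""
    return (
        f"Go to {url}. "
        f"Look at row {row} and collect that bit of data; it is in {fmt} format. "
        f"Put it into the attached spreadsheet '{sheet}' in the same format."
    )

vague = "Find our competitor's price."      # agent must guess everything
direct = build_direct_prompt(
    "https://example.com/pricing", 7, "GBP currency", "prices.xlsx")
```

The vague prompt forces the agent to explore, which is where the long 10-to-15-minute runs come from; the direct one leaves it nothing to guess.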
SPEAKER_02 (26:57):
So have we got good examples out there at the moment of agents that people could copy, where they're being used for something specific? I think there are bits around fraud, and bits around predictive maintenance, so it almost knows when a chiller's going to break down in a supermarket, and you can replace the motor before it breaks down on the sunny day when you can't sell ice cream.
(27:18):
Are there bits like that that people can start to go after?
SPEAKER_00 (27:21):
Yeah, well, some of those are like forecast engines, so they're alerts.
Yeah.
They're giving you an alert. Another way to think of it is you've got tools that are triggering alerts: they're forecasting what's meant to happen, and if something different happens, they alert you.
Yeah.
So the air con should be 25 degrees and it's
(27:43):
much colder than that. Then the agent would go and automatically act on that, would go and reconfigure it. That would be a good one.
SPEAKER_02 (27:55):
Or call the maintenance person to come and fix it.
SPEAKER_00 (27:58):
And there are people using AI to save money on their electricity bill by optimising their air con across large estates and large buildings. Something's monitoring it, it triggers an alert, and instead of sending a message to the maintenance person to go up to the fourth floor and turn the air con
(28:19):
down, the agent just does it.
Yeah.
So the agent is taking an action.
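The monitor-alert-act loop described here can be sketched in a few lines. This is a hypothetical illustration: the 25-degree setpoint comes from the conversation, but the tolerance and the `read_temperature` / `set_temperature` callables are assumptions, not a real building-management API.

```python
# Hypothetical closed-loop "agent" for the air-con example:
# monitor a reading and, instead of alerting a person, act directly.

SETPOINT_C = 25.0   # target temperature from the example
TOLERANCE_C = 2.0   # assumed drift allowed before the agent intervenes

def check_and_act(read_temperature, set_temperature):
    """Read the current temperature; if it drifts too far from the
    setpoint, reconfigure the air con instead of paging maintenance."""
    current = read_temperature()
    if abs(current - SETPOINT_C) <= TOLERANCE_C:
        return "ok"              # within tolerance: no action needed
    # The "agent" step: take the action a maintenance person would have.
    set_temperature(SETPOINT_C)
    return f"reset from {current:.1f}C to {SETPOINT_C:.1f}C"

# Simulated sensor and controller, just for illustration:
state = {"temp": 18.0}
result = check_and_act(lambda: state["temp"],
                       lambda t: state.update(temp=t))
print(result)  # the agent has reset the air con itself
```

The key distinction from an alert-only tool is the `set_temperature` call: an alerting system would stop after detecting the drift, while the agent closes the loop by acting on it.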
SPEAKER_02 (28:28):
Okay, and so yeah, the examples you talked about, chatbots, where they're just responding to customer queries. They've come on a bit, because they used to be really binary, didn't they? If you didn't give it exactly what it wanted, it'd just kind of say, "Oh, I'll put you through to an agent," or it'd go back into a loop.
SPEAKER_00 (28:48):
And here's the thing. So if I'm talking to a chatbot, and James was alluding to this, and I say to it, can you give me effectively FAQs, can you tell me what time you open? That to me is not an agent, that's just a smart large language model doing communication. If you said, Yeah, listen, I've got a problem with my account, you
(29:11):
know, I'm frozen out, can you change the password, or something like that?
It might ask a few questions to do the security checks.
SPEAKER_02 (29:18):
Yeah.
SPEAKER_00 (29:19):
Okay.
It might not be happy with one of the answers, so it asks the second-level question, and then it might come back and say, Yeah, fine, you've passed security now. I'm going to reset your password. We've sent you a text, can you confirm it, all that sort of stuff.
Yeah.
It could send the text, it could wait for that, and when it's got all the answers and it's happy, it can reset your password and then
(29:40):
send you the temporary password. So that's the difference; those are good use cases for agents. But the agent has to take some action. It can't just be giving you an answer.
That's it.
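The password-reset flow described here is a small state machine: gather answers, escalate to a second-level question when unsure, verify out-of-band, and only then take the action. This is a hypothetical sketch; `ask`, `verify_answer`, `send_text` and `reset_password` are illustrative stand-ins, not a real chatbot API.

```python
# Hypothetical sketch of the password-reset agent flow: security
# questions, a texted code, and finally the action itself.

def handle_reset(ask, verify_answer, send_text, reset_password):
    """Run the agent's steps and return a short outcome string."""
    # First security question; if the answer is weak, try the
    # second-level question before giving up.
    if not verify_answer(ask("security question 1")):
        if not verify_answer(ask("security question 2")):
            return "failed security"
    # Out-of-band check: send a text and wait for the code back.
    code = send_text()
    if ask("enter the code we texted you") != code:
        return "failed text check"
    # The agent's action, not just an answer:
    temp_password = reset_password()
    return f"temporary password issued: {temp_password}"
```

What makes this an agent rather than an FAQ bot is the final step: after the checks pass, it performs the reset and issues the temporary password itself, instead of handing the caller off to a person.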
SPEAKER_02 (29:52):
And there's lots in security as well, certainly in retail, isn't there? If you shop at a self-checkout now, there's probably a camera above you or on it that's maybe not turned on for facial recognition, but will be in the future to look at age profile. There's some stuff in the news about AI recognising serial shoplifters and putting them on banning lists, or alerting other people on that retail park
(30:16):
or in that shopping centre. So all that stuff seems to be, because of the cost challenge, really pushing the use cases of it and the technology. Have you got any other examples, James, before we close?
SPEAKER_01 (30:26):
Yeah, I think I read a case study from Domino's in Australia, which is interesting because the Australian market is generally considered to be one of those that is lagging in AI adoption, like the UK. But it was in Domino's, and they'd used a camera to monitor pizza quality as it was coming out of the oven. And, you know, a simple automated tool would just tell
(30:47):
you the pizzas are coming out bad, go and investigate it. But if you build into that: pizzas are coming out a little bit burnt, I'm going to turn the oven down automatically, then it works more effectively. And there was a franchise in Melbourne that had used a combination of demand forecasting using AI and some cameras to monitor product quality. They were getting pizza delivery down under six minutes from
(31:09):
clicking order on the website through to delivery to the house.
SPEAKER_02 (31:12):
Better quality, I
assume as well.
SPEAKER_01 (31:13):
Better quality, quicker, better customer satisfaction, actually enabling them. You know, you almost get the pizza before you've clicked it. It's Minority Report now. Predicting pizzas: we're all going to start to get Domino's delivery before we order it on the TV. We know we're going to get these orders at this time, we pre-make them, they're ready to go, fresh and hot and the highest quality.
(31:34):
You know, it's redefining what you can expect from pizza delivery, because it's taking 25 minutes off the delivery time. It's really remarkable. And that's where a combination of agents can really enable a business, a physical business selling physical products, to make a huge difference to what it does.
SPEAKER_02 (31:51):
Perfect.
So we will close podcast two on that point. And everybody listening and watching will now be having pizzas delivered that they didn't know they ordered. So enjoy your pizzas. I'm sure they'll be well made. Hopefully, they've not come from Australia, because they might be a bit cold. Thank you, Ian. Thank you, James, and I will speak to you again, James, on podcast three.
SPEAKER_00 (32:10):
Yes, thank you.
Thank you, Simon.