Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
SPEAKER_02 (00:00):
Welcome to episode
three of the AI Value Track.
I'm Simon Heddo, and I am joined by a returning guest, Ed Hogg,
who's the CEO of Solve by AI.
Hi, Ed.
Hi, Simon.
And a returning guest, James Boll, head of data and insights
at Rethink Productivity.
Hi Simon, thanks.
Beat me to the hi.
There we go.
We're keen to get going on this one.
(00:20):
So we're going to talk today about build versus buy on this episode and how it relates to AI.
James, do you just want to, I suppose, coming back to my mission on these three series of keeping it simple for people, demystify?
SPEAKER_01 (00:34):
Build versus buy, very simply, is: do you want to build this product from scratch yourself, or are you going to buy it off the shelf? And obviously there's a bit of a spectrum there, because you can buy something and configure it in the middle. But that's the decision you're going to have to make as a business with your AI tool. If you're looking at solving a generic problem that lots of businesses have solved before, or lots of other businesses need
(00:56):
to solve, generally you're going to be thinking about buying that in off the shelf. You need something to triage your customer service requests, or you need to sort through your documents. That problem's been solved before, so you're probably going to want to buy that off the shelf. If you have a specific industry challenge related to your business, or something that's actually business specific, you might want to do some configuration of that tool.
(01:20):
But generally, if you're solving a problem lots of other people need to solve, you're probably better off trying first to find something off the shelf that's proven. However, if you're dealing with something that's really specific to your business, and in particular something that's a strategic bottleneck for your business, it's preventing you from growing, and you don't think anybody else is going to be in that situation at all, then you're probably going to lean
(01:41):
towards building it yourself, not only because it's not been built and proven before, but also because the IP involved in building it is really important to your business and you don't necessarily want that to be shared around other people. And that's the decision you're going to make.
SPEAKER_02 (01:56):
So IP again, for those pulling everybody up on acronyms and all those kinds of things.
Intellectual property.
That's what I would say it stood for. Yeah, I'm glad you've said that, because I might have got it wrong. Internet protocol, is that an IP thing anyway? So in this instance, intellectual property. So your thoughts and ideas within that business that you want to keep secret, monetize, sell to somebody else, whatever it might be.
(02:18):
Okay. So the alarm bells in my head go off when we start to talk about bespoke, because I'm thinking that sounds expensive, that sounds tricky, that sounds like I'm going to need a load of really intelligent people that are going to cost me a lot of money. Sounds difficult to maintain. So my head goes to: there must be some really key decisions you've got to make to go down the bespoke route.
(02:41):
I mean, I'm sure you've got an opinion. We'll start with you, James, and then come to Ed.
SPEAKER_01 (02:44):
Yeah, I mean, you've just hit the nail on the head, really, with the challenge that leaders face when they're thinking about AI. If you're going to build yourself, you need to make sure you have the correct level of expertise within the business to do it and to do it well, which might mean hiring lots of PhDs. It might mean investing really heavily in your data architecture, and it might mean investing really heavily in your
(03:07):
IT infrastructure, because you might need some really hardcore computing power to do what you want to do. And so it is a lot of upfront cost and it is a long-term investment. Whereas buying something and trying to configure it, you can probably get to the solution quicker. However, as an organization, you want to be building your organizational expertise in this area, because we believe it's
(03:27):
part of the future and it's going to be part of our strategic advantage. So we want a team that's comfortable in it from the off. And so there's a balance to be struck in that decision. Um, Ed, I don't know if you would add to that.
SPEAKER_00 (03:41):
Yeah, I think for me you're looking at solving more niche problems when you're building yourself. There are some great tools out there that you can buy in. There's no point in us trying to compete with ChatGPT on a large language model, or go after developing our own artificial general intelligence models, which we're not at yet, but which would cost trillions to get
(04:02):
close to. But there are some very niche problems that are very focused within our organizations that we might be able to deploy against, and there are tools out there that help us do it. And what we're looking for is that kind of sweet spot of: is this niche enough that it only really applies to us, or to us and a very small number of people around us, and therefore
(04:22):
is it going to be a better trade-off, a better bottom-line effect, for us to build it in-house rather than try to buy it from outside?
SPEAKER_02 (04:30):
Or, if I was spinning it in a positive way, there's a problem that you could solve in a bespoke way that you could then monetize and sell onwards.
Oh, yeah.
Yeah. So there might be a different angle of: we see this as a wide-scale problem, we want to solve it in our bespoke way and then monetize it moving forward. Because I suppose the other way to look at it is, if it's
(04:52):
bespoke, it adds value to that organisation because it's theirs, back to the IP debate, however you do it. Okay, yeah, got it. Makes sense. So I will underline this, and it's directed at yourself, Ed, without getting technical. When organisations think, okay, I understand the problem, it's really niche to us, or we think it's got, you know, bigger legs
(05:15):
and we want to have it for ourselves for the time being before we sell it: from an architecture point of view, is that a big round of decisions, meetings, things that need to be considered? You know, what types of things will people need to think about when they're saying, yeah, tick, we're going bespoke, tick, we
(05:37):
know the niche problem we're trying to solve? You've got some tools and things, but what does it sit on? Where does it live? What does it look like?
SPEAKER_00 (05:45):
I think the first thing you need to do is have an oversight committee, which is just business jargon and not really helpful, but you need a group of people who are going to oversee how the model's performing at all stages: from a training point of view, from a performance point of view when deployed, and from an ethics point of view. Yeah, we quite often come across those problems. Um, so you need a group of people who are able to step in and go
(06:08):
no, because if you train it in the wrong direction or train it with certain data, you could get some effects you definitely don't want. You've then got, well, you want a strong leader of the project, and they need to look at your architecture and work out a way that technically you can embed it with your data.
SPEAKER_02 (06:27):
Because it's got to live somewhere, right? It's got to live somewhere. It's got to live in a database or whatever, on a server, somewhere in the cloud. Um, excuse my technical ignorance here, but you know, you've got to be able to access it in a safe and secure way, is my minimum kind of...
SPEAKER_00 (06:39):
And some of these models need a lot of computational power to run. You could need to use AWS's biggest, uh, Amazon Web Services. Sorry. You got there before me. Biggest server, or Google Cloud's biggest server, in order to run for a very small amount of time. So you need to put it somewhere that you're able to move the data into and out of, that meets your information security officer's threshold, or whoever in your organization
(07:01):
is responsible for data security. The other thing you need to think about is the skill set that you have in-house. So one of the architecture things you need is an architect. And coming back to what James said about PhDs, you need somebody who is clever enough and able to manipulate these tools in the right way.
(07:22):
And yes, they are getting more expensive, but they are out there, these people, and you can find them and get them to come and work for you on a very focused project if it's interesting enough. And most niche projects are; most data scientists love niche projects, because they're pushing the boundaries of what's been done before.
SPEAKER_02 (07:38):
Okay, so I'll come to you in a second, James, but just let me simplify those steps. So you need to understand where it lives. I think there's probably a point to make, because you talked about Amazon Web Services and Google Cloud: you need to try and understand the cost, because I assume you could start to run these things, be using massive computing somewhere, and then you get the bill through and go, oh my god,
(08:01):
I didn't realise it cost me 10 grand for those two minutes of computing. I'm making the numbers up, but in relative terms there's a cost when you're pulling that resource. So we know where it lives, we've then got our governance in place, a typical project working group with the right people, the right skill sets, looking over all the bits, and then we've got really clever people that are going to start to play with the
(08:21):
data to help solve the bespoke problem that we're building this for.
Yeah.
Yeah, that summarizes it really nicely.
Perfect.
SPEAKER_01 (08:29):
Does that fit with your understanding, James? Have you got any other points to add?
Yeah, I mean, the only thing I would add is that most organisations, I don't think, have got much of that in place when they start. And so it's the same with hiring the talent: you just start looking at it now, because you will need it at some point. And a couple of tips: most of your legacy data will need
(08:50):
some kind of curation and cleansing before you can use it. And the cure for bad data is not more bad data.
And that could be years of bigger data.
Oh, absolutely.
SPEAKER_02 (08:58):
That could be, you know, if we think WF, workforce management, I got there myself, I corrected myself. We think workforce management, WFM for those in the know, you know, you kind of need two years of data as a minimum, and people struggle with that. Even if it's as binary as till transaction data, it's got holes in it, gaps. So some of these bigger projects might need three, four, five, six, I'd say even ten, years.
(09:18):
We've had a pandemic in there and stuff, but big, big chunks of data at a potentially very granular level.
SPEAKER_01 (09:24):
Yeah, yeah. And it can be inconsistent, you know: it could have several different ways of categorizing the same thing, it might be labelled incorrectly, it might be duplicated. You really need to be on top of that, and you need to have change control mechanisms as well to ensure that you're managing it effectively. The other thing I think you would think about is where the data is coming from. Because customer interactions with your business naturally
(09:44):
generate data, but you might also be able to get data from vendors, from suppliers, and you might be able to get data sourced from third parties. How do you integrate all that information together? How do you connect it? And how do you keep the bits that need to be private, private? You need to consider all those things as well. So, I mean, it feels big, but the best step is to just start doing it.
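As a minimal sketch of the kind of curation James describes, the snippet below (Python, using pandas) normalises inconsistent categories and casing and drops duplicate rows. The column names, values and category mapping are all invented for illustration; real legacy data will need far more care than this.

```python
# A toy example of cleansing inconsistently categorised, duplicated legacy data.
# Everything here (columns, values, the "groc" -> "grocery" mapping) is hypothetical.
import pandas as pd

raw = pd.DataFrame({
    "store": ["London 01", "london 01", "Leeds 04", "Leeds 04"],
    "category": ["Grocery", "grocery", "GROC", "Grocery"],
    "till_sales": [1250.0, 1250.0, 980.0, 1015.0],
})

cleaned = (
    raw
    .assign(
        store=lambda df: df["store"].str.title(),          # consistent casing for store names
        category=lambda df: df["category"].str.lower()     # one way of categorising the same thing
                              .replace({"groc": "grocery"}),
    )
    .drop_duplicates()                                      # remove rows duplicated in the legacy extract
)
print(cleaned)
```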
SPEAKER_02 (10:04):
Yeah, so there's almost subsets of, let's call AI the program, and then there's lots of projects within the program. So there's a data project, there's a where-it's-going-to-live project, there's a what-does-the-cost-mechanism-look-like project, there's hiring the talent, etc. So it's big stuff, but you can break it down into manageable chunks to get through. Okay. So we do all that; let's cast our minds forward.
(10:27):
We've got something we can use that works, and, as ever, people want to do a pilot: does it work, doesn't it work, how quickly can we go? People tend to get stuck, I think, not just in AI but generally, in pilots. Sometimes it's because there's no motivation, sometimes maybe stuff's not clear.
(10:47):
So, Ed, in your experience, how do you unblock that stoppage? Or, even better, how do you stop it happening full stop and set people up for success in those pilot situations?
SPEAKER_00 (10:58):
Well, the first thing you have to do is measure where you are right now, before you even jump into a pilot. You have to understand what you're trying to improve; you have to get your baseline. Once you've got a baseline, the next thing is having a clear measurement of what success looks like and where you are tracking towards that, with constant feedback on it, making sure you're reviewing it. Project speak is probably
(11:20):
30, 60, 90, but when it's deployed, when you go through, have those clear gateways of: right, we're now going to look back and measure our KPIs. The other thing is to make sure that it is measurable. Yeah, when you're building bespoke stuff, when you're working with people, you need to have an overview of: right, actually, can I measure what impact it's having? Can I measure what it's doing?
(11:40):
Explainable AI is quite a problem, but it's getting easier and easier to do that. You need to take advantage of those tools.
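As a small, purely illustrative sketch of Ed's point, the snippet below compares a measured baseline against snapshots taken at 30, 60 and 90-day review gateways. The KPI name and every figure are made up.

```python
# Measure the baseline before the pilot, then review against it at clear gateways.
# All numbers are invented for illustration.
baseline = {"avg_handling_time_mins": 12.5}   # measured before the pilot starts

review_gates = {                              # snapshots taken at each review gateway (days)
    30: {"avg_handling_time_mins": 11.8},
    60: {"avg_handling_time_mins": 10.9},
    90: {"avg_handling_time_mins": 10.2},
}

for day, snapshot in review_gates.items():
    for kpi, value in snapshot.items():
        change = (value - baseline[kpi]) / baseline[kpi]
        print(f"Day {day}: {kpi} = {value} ({change:+.1%} vs baseline)")
```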
SPEAKER_02 (11:46):
Explainable in terms of what, exactly?
SPEAKER_00 (11:48):
AIs are traditionally a black box. They are a group of neurons within a black box; each is effectively a decision point within the AI which is making a decision, and you've got thousands of these different decisions going on in there. It's very hard to see how someone's brain is working, and it's the same problem with AI. But there are tools out there which can tell you: this is why
(12:10):
the AI made that decision, or this is why the AI did that.
SPEAKER_02 (12:13):
So this is kind of justifying how we got to the answer, almost. Or people wanting to daisy-chain back to say: we put this in, we got this answer, but actually I want to understand how it got to the answer. So mathematically, you know, one plus one equals two; we can see the daisy chain, we put a one and a plus and a one in and we got two out. But it's clearly far more complicated than that.
SPEAKER_00 (12:34):
Yeah, yeah, there's some really good maths here, without getting too geeky and bringing game theory out for a second. There's some really clever tools out there that are able to say: the reason that the AI thinks this picture of a dog is a dog is because 98% of the input was recognised as dog and 2% of it looks like cat. And it's able to show you the breakdown of the
(12:57):
reasoning, the model's thinking on it, and therefore why it output the right one.
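A toy illustration of the kind of breakdown Ed describes. This is not the specific tooling he has in mind (in practice you might reach for a game-theory-based explainability library such as SHAP); it simply uses a linear classifier, where per-feature contributions can be read off directly, and every feature name and number is invented.

```python
# Toy "dog vs cat" classifier showing a probability breakdown and per-feature contributions.
# Features, values and labels are all made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["snout_length", "ear_floppiness", "whisker_density"]
X = np.array([
    [0.9, 0.8, 0.1],   # dog-like examples
    [0.8, 0.7, 0.2],
    [0.2, 0.1, 0.9],   # cat-like examples
    [0.1, 0.2, 0.8],
])
y = np.array([1, 1, 0, 0])  # 1 = dog, 0 = cat

model = LogisticRegression().fit(X, y)

new_image = np.array([[0.85, 0.75, 0.15]])
proba = model.predict_proba(new_image)[0]
print(f"cat: {proba[0]:.0%}, dog: {proba[1]:.0%}")  # the "98% dog, 2% cat" style breakdown

# For a linear model, coefficient * feature value shows how much each input
# pushed the decision towards "dog"; richer models need dedicated tools.
for name, coef, value in zip(feature_names, model.coef_[0], new_image[0]):
    print(f"{name}: contribution {coef * value:+.2f}")
```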
And then I think a lot of the people who this podcast is targeted at are thinking about a top-level number. They're thinking about EBITDA, they're thinking about profit, thinking about reporting up to an executive committee. And I think it's worth reviewing, at some points throughout
(13:22):
the project, how your project is going to impact EBITDA. Yeah. Yeah, Deloitte says 68% of projects that are done as niche and bespoke have an EBITDA increase associated with them. It's really worth making sure that you are tracking that and you are part of the 68 rather than part of the 32.
SPEAKER_02 (13:41):
Back to computing costs: that could absolutely kill your business case if there are run costs you hadn't factored in, or it costs more because the data set's bigger or the cost of computing goes up. There's all those things that can, I suppose, take you off kilter.
SPEAKER_00 (13:56):
The inference costs, the cost to use the AI on a daily basis, they tend to have the biggest impact on whether you're going to get your EBITDA win. And yeah, so it's really worth monitoring that.
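As a back-of-envelope sketch of why inference (run) costs need monitoring against the business case, the figures below are entirely hypothetical; substitute your own volumes, cloud pricing and assumed benefit.

```python
# Rough estimate of ongoing inference cost versus the benefit assumed in the business case.
# Every number is hypothetical.
requests_per_day = 5_000          # e.g. customer-service queries triaged each day
seconds_per_request = 2.0         # compute time per request on the chosen instance
cost_per_compute_hour = 3.50      # whatever your provider actually charges

daily_cost = requests_per_day * seconds_per_request / 3600 * cost_per_compute_hour
annual_cost = daily_cost * 365
print(f"~{daily_cost:,.0f} per day, ~{annual_cost:,.0f} per year")

assumed_annual_benefit = 50_000   # hypothetical EBITDA improvement in the business case
print(f"Run cost consumes {annual_cost / assumed_annual_benefit:.0%} of the expected benefit")
```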
SPEAKER_01 (14:10):
Yeah, and it's worth thinking about it before you've even started the pilot, because the biggest pilot killer, I think, is probably finance saying: you need how much now? If you've not got a broad sense of how much it's going to cost to deploy and maintain and operate up front, then it might be worth taking a step back and making sure you know that before you embark on the pilot. Because you talked in episode one about challenges being
(14:32):
cultural with rolling these things out. Often within a culture, the person that has the biggest say-so, yes or no, is the finance director. And yeah, if you're suddenly going and asking for a couple of million pounds because you want to return three or four million, then you're going to have a complicated conversation if you've not expressed that upfront and got buy-in already.
SPEAKER_02 (14:49):
I suppose it leads us nicely into that transition from proof of concept to production, or rollout, depending on how you phrase it in your organisation. And you touched there, James, on some high-level governance control, so understanding the full cost of the project almost before you embark, or at least an indicative view, so that people know what they're signing up to
(15:10):
as you get there. And you know, those will change. That's life. Things go up in price, not particularly down in price, and assumptions are assumptions, so we can crystallise some of those and start to be more accurate. But are there any other things that you've seen, or that you'd ask people to think about, to mitigate some of those really difficult conversations at the point of: this looks good, we
(15:31):
think it works, we think it's got legs... all right, we need a six-month break now before we can roll it out?
SPEAKER_01 (15:37):
Yeah, well, I mean, I think Ed answered this question really well in his last answer. I would kind of build on what he said around the ethics. I think you need to establish for your organization what your ethical position is around AI, and that is, you know: how much oversight do we need of the decision making that Ed talked about? How are we going to check for failures, and how are we going to react to them?
(15:58):
And remember, when most people think about ethics in AI, they think about bias in decision making, which is really important. But there's also ethics involved in the fact that if your AI isn't making the right predictions all the time, that has an impact on your customers. What are the ethics of that? I think you need to have these discussions up front, and have them early and in some detail, and establish how you feel about it in order not to have that pain later.
(16:20):
And you definitely don't want that pain a year and a half after you've rolled it out. And then the obvious point, I think, around communicating continuously: you talked in episode one about people wanting to be involved in these decisions and know what's going on, and you need to be doing that.
SPEAKER_00 (16:36):
I think for me, with regards to governance, scope creep is the classic project problem.
The silent killer.
Yeah, and I think the same is true with AI. You define your scope, only give it the data that's relevant for that, and train it on that. If you then try and ask it to do different things, you sometimes,
(16:57):
well, quite often, always, need to give it more data. Whether you're now asking it for something that requires weather data or moon cycles or whatever you need to add, you do need to add that data in, you do need to throw it in there. And that tends to be a retraining or an additional training cycle, and you lose the EBITDA wins, your ROI.
(17:18):
The other thing I'd focus on as well with governance is reliability. Yeah, we have these SLAs externally with suppliers, but...
SLA?
Service level agreements. The amount of, uh, you're challenging me there. Uh, I had to think. Yeah, the service level agreements where you say 99.9% of the time this has to be up. If we're rolling it out to staff internally, even if it's only a
(17:39):
small organisation, you need to make sure that you're always up, and therefore you need to have reliability as part of the governance of your project.
SPEAKER_02 (17:48):
And I struggle a little bit with the word pilot, because it kind of gives the connotation that it's optional at the end. So I think for lots of organisations, a pilot really is phase one rollout: it's test and learn, a couple of locations, whoever is going to feel the pain, and then we're going to move through. Do you agree? Or does a pilot typically stop there if it doesn't work?
(18:10):
Do people stop?
SPEAKER_00 (18:11):
It's hard with a lot of the build projects to do a pilot, because the cost is loaded. The initial cost is quite a lot up front. I mean, you can build something as a research project and it not work and you just cut your losses at the end of it, but that's not a pilot, it's a build and then a deploy stage.
(18:34):
And I think that one of the potential benefits of using somebody else is that they've been through the build phase, so you don't have to pay for that. Yes, perhaps the long-run costs of it are increased, but the upfront investment is so much less, and that's often a decision you need to make. And maybe if there is a tool out there that you can use for that,
(18:55):
then you should go the other way.
SPEAKER_02 (18:57):
So, yes or no answer only allowed now. I'm the host, so I can set the rules. Does a pilot have to be perfect before it goes to production or rollout?
SPEAKER_00 (19:08):
No. That's it.
SPEAKER_02 (19:10):
We're done.
unknown (19:11):
No.
SPEAKER_02 (19:11):
Thank you. I agree. Go on, I'll let you carry on. There's no such thing as perfect with these things. Exactly. So again, the point you made, Ed: keep the scope tight, because otherwise that can cause the creep of, will we have an extended pilot, or will we have a phase two. So keep the scope tight. When you've achieved your, I don't know, 80%, is it the 80-20
(19:32):
rule? Actually, that might be good enough, because you can carry on to get to 90% to get a little more return, back to your EBITDA and ROI numbers, and even to 95% to get marginal gains. That last 20%, I think, is the bit where sometimes people fall into the gap of trying to get a bit more, when actually, in relative terms, they've got most of the benefit
(19:52):
already.
SPEAKER_00 (19:54):
With AI, like most projects, it's MoSCoW, the must, should, could, would kind of stuff. The must, defining what your must is, is effectively your scope here. Yeah, and everything else, if it does it, great, but it will continue to learn and you can continue to improve on your own.
SPEAKER_02 (20:12):
You need strong leadership to enforce that, because otherwise everybody falls back to wanting everything. So, we talked about IP before, intellectual property. You've got potentially partners that are coming in to work with you, you've got new hires, teams. So how do you go about protecting that? If you're employed, there's typically clauses in contracts
(20:32):
around the IP sitting with the business, not with you personally, etc. But there must be a risk, with this moving so quickly, that people's ideas all of a sudden appear in a different tool, a different bespoke platform, that are very similar?
SPEAKER_00 (20:47):
Yeah, it's a difficult one. It's often case by case with IP. It's: what intellectual property are you providing? What you often don't want to do is bring in a partner to train on your data and then have your competitor use your data to get ahead. So you don't want your competitor to be succeeding
(21:07):
because you outlaid the capital to build the model. So definitely protecting that is really important with AI. I think be very clear about what is being used as a training data set, what's being used as a deployed data set, and what is available to the public; that needs to be clear in the contract.
(21:29):
You need to be very clear as to what information, data, architecture, the model, and the current training state of the model is being brought into the relationship from outside, from third parties, and what's being used here. There are some very public examples over the last couple of weeks, as we record this, where IP has been made
(21:50):
available, or people have had pretty high-level security risks from publishing information inadvertently to the web, and it's been taken advantage of by actors who are bad, and therefore, yeah, you really need to be cautious of that. But I think with IP, you just have to make sure you
(22:11):
spend the time before you start working out what is in scope, where your data is going to be used, and making sure that you maintain the rights to your core data not being used to benefit your competitors.
SPEAKER_02 (22:24):
Yeah.
That makes complete sense.
Clearly, take legal advice is the underlying disclaimer in that piece.
SPEAKER_01 (22:30):
Don't use ChatGPT to check the contract. Well, maybe as a starting point. I mean, I would argue most businesses will be partnering with people to help them build bespoke stuff to begin with, because they won't have the organizational capabilities to do so. Yeah, and so you need to build that partner relationship from the off and know what terms you're going to be dealing on. And again, it's one of those conversations where, even if
(22:52):
you're thinking two years down the line, start talking to partners now.
SPEAKER_02 (22:55):
And a good partner should have that conversation in the early days, because they should be used to the conversation, they should be used to people asking, so they should almost have the answers packaged, however they deploy it, ready to go within that model, right? So that's good. And then, if we're going to build bespoke and we're starting to scale it, how do we stop the organizational blocker?
(23:17):
So how do we stop the organization, when it becomes more well known, slowing it down?
SPEAKER_01 (23:21):
Yeah, I mean, you talked in episode one about cultural challenges being the big problem, and this being change management, um, which I think is really key. But London Business School published something talking about the four horsemen of the AI apocalypse, being the four things that will stop your project from scaling
(23:41):
effectively. One is finance. Yeah. You run out of money.
SPEAKER_02 (23:43):
Run out of money, as in it cost too much more than we thought.
SPEAKER_01 (23:45):
Yeah, or we can't evaluate this effectively. Compliance is another one, because it's very difficult to understand the risks inherent in an AI tool; I don't know if Ed wants to come in on that in a second. HR, because they've not hired the right people to enable you to scale it, or it's hard to find them. And lastly, IT, not getting the underlying infrastructure in
(24:06):
place. You know, data needs to be managed collectively. You need standards and processes for data collection and data curation, and you need catalogues and ways to manage what's in there. And those four departments can all block something from scaling because they've not done that work, which then goes back to the original conversation: when you start out on your pilot or your proof of concept or
(24:29):
phase one rollout, whatever you call it, you need to be thinking down the line, okay, what's this going to look like, and engaging people from the start.
SPEAKER_02 (24:36):
I don't know if
you've got any builds on those,
Ed.
SPEAKER_00 (24:38):
Yeah, I agree with everything you just said. Uh, you summed up the compliance stuff pretty well. It's really important to engage your chief information security officer in this stuff from the very beginning. There are some very easy ways to get compliance wrong, particularly with regards to ethics, with regards to exposing your data in an
(25:03):
unsecure way. It's pretty easy to make some pretty catastrophic errors if you don't just follow some basic rules and some basic steps around data security and AI security. So definitely spend some time with somebody who's trained in that area.
SPEAKER_02 (25:18):
And ethics seems to be one of those things where there are sort of laws or EU directives, and, you know, again, it'll probably always be slightly different in the States. So that again is something to keep track of, because it seems to be a moving target all the time.
SPEAKER_00 (25:33):
Yeah, the EU AI Act was passed very recently and is law now, and it's something that you do need to be aware of if you're deploying any of your models inside the EU. It's not as prevalent in the United Kingdom, definitely not as prevalent in the US; different laws in different places. You need to make sure of GDPR, and California workers' law, and
(25:54):
all those kinds of fun laws that you need to be aware of. But they all essentially mean you need to protect your employees' data, never provide out more personal data than is needed, make sure you're training sensibly, and make sure that you're always within a scope and that there's a defined reason for using it. So I think those are all the kind of key things you need to
(26:16):
take care of. And if you are doing those, then the others should come along. You should be able to get HR and finance to buy in because you're following some very sensible principles. Yeah, compliance and governance don't have to be a hindrance; they actually can be a help sometimes.
Perfect.
SPEAKER_02 (26:35):
Well, on that note, thank you, Ed, for your contribution again, and James. Again, key takeaways for me are back to the first part of this conversation in podcast three, around being really clear on why you'd buy or why you'd build, and then that kind of sets you off on a path with all the other things we talked about. So hopefully some really good insights there for the people
(26:57):
watching and listening. If they want to find out more: Ed, James, so James Boll, Ed Hogg on LinkedIn.
Edward Hogg.
Edward Hogg. And Simon Heddo; we're all on LinkedIn. I'm probably the least favourable of the three of us to reach out to for any technical or AI advice, Ed and James are far better than me, but we're all there. Feel free to contact us and connect. And thanks for watching and listening.