Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Peter Warren (00:05):
Hey everyone, welcome back to part two of our series talking about AI to ROI.
This is part of our ongoing energy and transition talks here at CGI.
My guest today, from part one and part two, is Fred.
I'll let you reintroduce yourself again, Fred.
Frederic Miskawi (00:21):
Hi, Peter.
Fred Miskawi.
I'm part of our global AI enablement team.
I lead our AI innovation expert services globally, and that gives me the deep luck of being able to work across geographies, across the world, across teams with clients, and across different industries as well.
I've been involved with artificial intelligence in one
(00:41):
way or another since the 1990s.
Peter Warren (00:44):
Everybody thinks AI may have started right now, but of course this industry has been using machine learning for a long time.
So we understand the early days of it, and CGI has been part of that as well.
In part one, we covered off a few things about "is my data good enough?"
We touched base on the idea and concept of using the best algorithm for the job.
Don't necessarily use a sledgehammer to
(01:07):
put in a screw, that type of idea about KPIs.
Today we're going to hit a couple of interesting concepts: the shift between everything needing to be a large language model, like one of the big hyperscaler systems, versus small language models.
When do you go to a small language model?
When do you determine to do things that are more
(01:29):
deterministic and more control-based?
How do you manage that decision process, and what fits what?
Frederic Miskawi (01:36):
Great question, Peter.
And that goes back to what I was mentioning in part one, which is the best algorithm for the job.
We're talking about a multi-model ecosystem, a multi-agent ecosystem.
And what we're seeing, and what we've seen evolve very quickly and organically, is that the cost and energy usage associated with
(01:56):
these large language models can be quite substantial.
So how do you manage budgets?
How do you manage the overall cost of the solution?
How do you also manage the response time?
Performance can also influence what kind of algorithm you're leveraging.
So we're seeing this shift towards smaller models, on-premise models, working with on-cloud or hyperscaler
(02:19):
models.
We're seeing quantization happen with those models so that you can get them as small as needed to work on a device.
Even six months ago, the on-device models were not necessarily great, but you're seeing that evolve very quickly.
You see the capabilities evolve very quickly, to the point where
(02:40):
now you've got NVIDIA coming out with things like Jetson Thor, which will be powering a new generation of walking models, these bipedal robots that are coming out in the next 12 to 24 months.
Peter Warren (02:55):
Yeah, we just saw the robot Olympics in Beijing, I guess for the first time, and it was kind of a mix of things.
But I suppose that's a very dramatic example of edge computing.
I mean, our industry, everything from oil rigs to energy production right through to smart grids, uses a lot of edge computing type technology, not all of it
(03:19):
being, you know, major computer systems, some of it even being a little legacy.
How do you see those types of computer systems evolving as we move forward?
Frederic Miskawi (03:29):
You're going to see an evolution of that ecosystem.
We're already seeing it in the algorithms.
You're going to have a wide variety of systems that are powering our enterprise, powering our networks, powering our various assets across the company.
We're seeing it even for us as consulting firms; we're starting to see that evolution occur.
(03:49):
As we're talking about the energy and utility industry, you're going to see this deployment of bipedal robots, increasingly powerful and capable.
What we're seeing in the lab today are robots that walk and talk like you and I, very fluid, able to do martial arts
(04:11):
or dance, able to move very fine objects and operate equipment in a way that is a lot more deterministic than it was in the past.
That level of ecosystem evolution is what we're seeing happen today.
And what you're not seeing, what's in the labs today that we get glimpses of with the work that we do, is going to
(04:35):
truly revolutionize the industry and the way that we operate.
You're going to have very dangerous solutions and jobs that are handled increasingly by teams of humans and robots.
And if a robot gets crushed, you're going to have a lot less heartache than if a human co-worker
(04:59):
gets hurt.
So, just by simple need, you're going to see this evolution of new algorithms, and new ways of running these algorithms in our analog world, occur and develop over the next two to three years.
Peter Warren (05:15):
The mining industry has been a big adopter of robotics and self-driving vehicles.
This industry has been a big adopter of the quadrupeds, the four-legged robots, the robot dogs, as they're sometimes called.
We see those in harsh environments.
Even just looking at edge computing, let's say a static device, something on the edge, making a decision about do I
(05:37):
turn this electricity on, do I open that dam, what do I do, that type of edge computing.
And you made an interesting comment.
We were saying that some of these new models will even work on your very old Mac.
It's not that people need the latest and greatest NVIDIA technology in every case.
How do you see people moving forward with some of these smaller systems, maybe on some more affordable
(05:57):
platforms?
Frederic Miskawi (05:58):
Yeah, and that's what I was mentioning earlier about quantized models: the ability to take a small model to begin with, and then streamline it, filter it, remove some of the underlying parameters in order to get it as small as possible to run on a smaller, weaker device, where we can run some of these more powerful models even
(06:21):
on CPUs.
It might be a little slower, but it works.
With that technology, you're always going to be dealing with a statistical model.
So you've got to be able to work with these smaller quantized models running on this older hardware.
And you've got to work with them in layering, to make sure that you get a little bit more deterministic behavior out of
(06:42):
it.
And that's where agents come in.
So if you have a very small model that can run on device, that is really the brain of something a little bit more deterministic in the body of what we call an AI agent, then you have the ability to run decisions, binary decisions, and more complicated categorization on device.
So you've got very targeted needs.
(07:04):
And for very targeted needs, you can do that on older machinery.
And that means that you don't have to wait.
You don't have to complete a massive digital transformation in order to get the benefit of the technology.
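The quantization Fred describes, shrinking a model's parameters so it can run on weaker hardware or plain CPUs, can be sketched in a few lines of Python. This is a minimal illustration of the general int8 idea, not any specific framework's implementation; the function names are our own:

```python
# Illustrative sketch (not from the episode) of post-training int8
# quantization: map float weights to 8-bit integers plus one shared
# scale factor, trading a little precision for a ~4x smaller footprint.

def quantize_int8(weights):
    """Quantize float weights to int8 values with a shared scale."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # avoid a zero scale
    return [round(w / scale) for w in weights], scale

def dequantize(values, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in values]

weights = [0.42, -1.27, 0.08, 0.91]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Every quantized value fits in one byte instead of four (float32),
# and the round-trip error is bounded by half a quantization step.
```

Real toolchains apply this per layer, and often per channel, and may prune parameters as well, but the trade is the same: a small, bounded loss of precision in exchange for a model that fits on a smaller, weaker device.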
Peter Warren (07:20):
And you've talked about layering, and I think layering, you know, you've explained the very small, the edge, and that's probably going to continue to expand and improve, as you mentioned.
If you start to look at things you referred to, a concept about enterprise neural meshes and the use of a digital twin, actually stacking those and making a digital triplet, which is a concept that Diane
(07:42):
Gushu and yourself have brought forward and talked about quite a bit.
That really is the layering.
And when I explained that to a couple of executives, they said, well, finally I see a value to me for AI, because it's actually helping me, versus maybe only the people in the field, or the different layers.
Do you want to explain that layering, really right from the edge through to helping the executive decide what to do?
Frederic Miskawi (08:04):
Yeah, our vision and what we're working towards, at least internally from a client zero story perspective, is the enterprise neural mesh.
It's the near real-time view of the enterprise.
And we've been seeing inklings of that over time.
But the idea is to be able to answer any and all questions that executives might have, stakeholders, investors,
(08:26):
analysts, on what's happening within a given enterprise.
So for us, what that means is, for example, 21, 22%, I think, of our revenue comes from IP.
It's to be able to see and have visibility of the value that's being delivered: how many people are working on it, what kind of value is being delivered, the quality that comes out, and to see it on a near real-time basis.
(08:48):
These models and this decentralization of intelligence, both on legacy hardware and new hardware, as well as upcoming bipedal robots that will become not breathing but moving data collection engines, we're feeding all that into this digital triplet that gives you a
(09:09):
view and understanding of the enterprise so that you can make decisions.
And also, most importantly, you can start running simulations, where, when you have that level and type of data collection and the layering that comes with it, you can start looking at, well, if I were to make this decision, what would occur within the enterprise?
What would be the impact of that particular strategic
(09:30):
decision?
That's what we're seeing evolve.
The technology enables you to do that; it's already there.
And now it's becoming more of a human transformation story.
The technology is moving, and has moved, beyond our ability to absorb it.
That's what I see day in and day out.
We're just working from an organizational change management
(09:51):
perspective with teams, individuals, clients, and client teams, building that capability and understanding of the technology so that we can absorb features and functions that were released several months ago, maybe even several years ago.
So that's what we're seeing right now.
It's that human evolution of understanding of this
(10:12):
technology, the absorption of the technology, to work towards an enterprise neural mesh, a near real-time view of the enterprise.
Peter Warren (10:22):
Yeah, it's interesting.
You mentioned customer zero.
For those that don't know what we're referring to there, it means we're doing this to ourselves.
We're actually modifying how we operate internally and how we run.
But simultaneously, as you mentioned, we're working with clients that are sort of first movers in those areas and touching base on it.
To wrap this up, where do you think, if you used
(10:45):
your crystal ball? We see a bunch of both good things and bad things today in the news.
There was talk about disinformation from certain websites, specifically coming from Russia, trying to train large language models, more the public ones, on stuff.
You know, that's probably propaganda, that's their point of view.
How do you sort of manage all of these things as you move
(11:07):
forward?
When do you, you know, even in your personal life, how do you manage AI, and how do you see companies managing this as they go forward, again, for data quality and getting the right outcome to do the right action?
Frederic Miskawi (11:19):
Yeah, in two words: healthy paranoia.
And funnily enough, I had a similar conversation with my oldest son this morning on healthy paranoia.
These solutions are amplifiers; they accelerate access to knowledge and information, whether that information is accurate or not, whether that information is intentionally inaccurate.
(11:43):
And with healthy paranoia, you're building a filter.
You're building filtering layers between yourself and this technology, and potential actors that want to effectively brainwash you.
And with healthy paranoia comes understanding that this technology can be used and abused for amplification.
(12:04):
This technology is not necessarily accurate either, not always.
It will get better, of course.
But right now, what we're seeing is that we are dealing with statistical engines that may or may not always tell you facts.
And with healthy paranoia, you can start asking questions, validating information, double-checking, and having multiple
(12:26):
sources of information.
So we're even doing that in our solutioning.
We're bringing in multiple models, multiple hyperscaler models, as well as on-premise models within the confines of the same solution, again, to help with deterministic behavior and more accuracy, dealing with biases and ensuring
(12:47):
transparency.
So we're going to continue to see these news stories come out, and this technology will be used to manipulate, will be used to steer groups of humans toward a particular realization.
And it's up to all of us individually to realize that this technology is powerful.
(13:08):
This technology has to be questioned, and we have to build healthy paranoia as we move forward.
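The multi-model cross-checking Fred describes, running the same question past several models and only trusting agreement, can be sketched as a simple majority vote. The model functions below are hypothetical stand-ins for separate backends, and the quorum threshold is an assumption of this sketch:

```python
from collections import Counter

# Hypothetical stand-ins for separate model backends; in a real
# solution these would be calls to different hyperscaler or
# on-premise models, as described in the episode.
def model_a(question): return "paris"
def model_b(question): return "paris"
def model_c(question): return "lyon"   # one model disagrees

def cross_check(question, models, quorum=2):
    """Ask every model the same question and accept an answer only
    if at least `quorum` of them independently agree; otherwise
    return None so a human (or another layer) can take over."""
    answers = [model(question) for model in models]
    best, count = Counter(answers).most_common(1)[0]
    return best if count >= quorum else None

result = cross_check("capital of France?", [model_a, model_b, model_c])
# Two of the three models agree, so "paris" is accepted; raise the
# quorum to 3 and the disagreement escalates as None instead.
```

Escalating to a human when the models disagree is one way the "filtering layers" of healthy paranoia can show up in an actual solution.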
Peter Warren (13:15):
Yeah, no, it's a good idea, even when you're using just the data within.
And we'll wrap up here in a second, going back to the first question about data, data quality, and governance.
You know, in organizations maintaining the data, you said in the first part that we didn't have to start with pristine data, but it's continuing to evolve.
So that requires a bit of change management in the
(13:38):
organization.
We see that the companies that are being most successful with this type of technology are being very agile in the way they work, are restructuring things, managing the use of this, and responding to the data.
But how do they go forward on maintaining this forest of data that is continually growing and getting weeds?
How do they deal with that on a day-to-day basis?
Frederic Miskawi (13:59):
Well, I think number one is to embrace the idea that there is such a thing as digital entropy.
With digital entropy, the idea is that over time your data will continue to lose the level of accuracy that it has, the level of usefulness that it has, to the point where that data may actually be counterproductive to your
(14:20):
business goals.
So when you do that, and you have that healthy paranoia with your systems and digital entropy, you're putting layering in place to make sure that this data is being catered to in a more automated fashion.
What we've been seeing over time, through a kind of historical anthropology of that data, is that it gets old, it gets
(14:43):
stored, it gets layered, it may be abused, it may be reused, and all of that was, you know, following human processes.
Now, these systems need not just quality data; they need data.
And the more data, the better.
And they can infer patterns based on the data that gets
(15:04):
ingested, but you want to be able to cater to that data: to make sure that you have the ability to collect the data, that you have the ability to cater to the quality of the data in case there are known sources of data that are not conducive to your business goals, and to eliminate the data or archive it if needed.
These processes, this layering that you put in place,
(15:28):
all of it is there to manage digital entropy.
So if you know and understand that there are natural organic slash digital processes in place, that's going to make you understand that you have to have healthy paranoia, and that you have to put these solutions and layering in place to manage the ecosystem.
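Digital entropy, data steadily losing accuracy and usefulness over time, can be modeled as a decay score with layered policies attached to it. This is a minimal sketch, assuming an exponential decay with a hypothetical one-year half-life and illustrative thresholds, none of which come from the episode:

```python
# Illustrative model of "digital entropy": a record's usefulness
# decays with age, and layered policies act on the decayed score.
# The half-life and thresholds are assumptions, not episode figures.

def entropy_score(age_days, base_quality=1.0, half_life_days=365):
    """Usefulness decays exponentially: after one half-life,
    half of the original quality remains."""
    return base_quality * 0.5 ** (age_days / half_life_days)

def triage(age_days, review_below=0.5, archive_below=0.25):
    """Layered policy: keep fresh data, flag stale data for review,
    and archive data whose score has decayed below the floor."""
    score = entropy_score(age_days)
    if score < archive_below:
        return "archive"
    if score < review_below:
        return "review"
    return "keep"

# A 100-day-old record is kept, a 400-day-old one is flagged for
# review, and an 800-day-old one is archived automatically.
```

The point is not the exact curve but the automation: once entropy is modeled, catering to the data (reviewing, archiving, eliminating) can happen in layers without waiting for a human to notice the rot.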
Peter Warren (15:50):
I think that's a great spot to stop, Fred.
Thank you very much for today's conversation in both part one and part two.
Thank you, everyone, for listening, and we will see you again in our podcast series on the ever-evolving energy transition.
Thanks very much.
Bye-bye.
Frederic Miskawi (16:05):
Thank you,
everyone.