Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Welcome everyone to another episode of Dynamics Corner. Brad, can AI do more than just put words together?
I'm your co-host, Chris.
Speaker 2 (00:11):
And this is Brad.
This episode was recorded on July 25th, 2025.
Chris, Chris, Chris, can AI do more than put words together?
AI can do a lot, and AI is all over the place these days.
I know we often focus and talk about how AI can help you within your ERP software, but AI can help you outside of the ERP software as well.
(00:31):
Today we had the opportunity to speak with Brian DuBois about AI and manufacturing.
Good afternoon, sir.
(00:59):
How are you doing? Hello, hello. Doing well. How are you? Doing very well, thank you.
Thank you for taking the time to speak with us.
I was just talking with Chris. I have two new obsessions, and I don't know how I got into these obsessions.
Someone told me I'm a year late on one of them.
I'm into making sourdough bread.
Speaker 3 (01:20):
Okay, yeah, you're way behind, man. I'm way behind, so I made my own starter this week.
Speaker 2 (01:27):
I think one day I
made three loaves, I don't know.
Speaker 1 (01:31):
And the funny thing is, hey, you've got to keep it alive, man. You've got to keep this starter alive.
Speaker 2 (01:35):
I do.
I have the starter, I keep it, and I get scientific. Like, I measure the flour, I measure the water.
Speaker 1 (01:43):
I measure the water, I stir it, I track it.
When you keep going, you stop caring about measuring that stuff, you just do it, man.
Speaker 2 (01:49):
No, it's going well.
And now I'm practicing the designs, but I'm practicing.
I think I finally have the recipe that I like.
I don't know if it's the flour, the air or what, but this stuff is something else.
I fed the starter this morning, and four hours later it had already doubled in size.
It's like a ferocious flour-eating thing.
Speaker 1 (02:08):
Yeah, they're alive, man, they're alive.
Speaker 2 (02:11):
I know they're alive,
but I'm not going to feed this
thing frigging four times a day.
Speaker 1 (02:15):
You have to. No, I can't.
Speaker 3 (02:20):
You don't have to feed it four times a day, do you?
Speaker 1 (02:23):
No, you kind of have to keep an eye on it.
There's a point. It's not really more of a time, it's more like when you see it rise, and you know you've got to remove some of the stuff.
Yeah, it's when it doubles in size.
I don't know.
Speaker 2 (02:37):
Everyone says when it doubles in size you have to take it in half.
So this thing doubled in size in four hours.
Like, it started, you know, you put it in, you wait 24 hours.
Well, whatever, it takes seven to ten days for it to grow finally, or to become alive.
Okay, and then the first day I did it, it was like a little slow.
And then I'm like, dang, this is fast.
So it's like 12 hours it doubled.
(02:59):
And then I started baking, and then I'm doing this, and now it's like four hours it's doubling. It's like a full-time job.
The other obsession that I have is I've been messing with Raspberry Pis.
I have that also.
You're like a decade behind, sir. Listen, I am an old man, so I'm a
(03:25):
little behind the times, but at least I'm behind the times and I'm able to follow, because now I am a vibe-coding AI Pi app-creating person.
I am creating so many things.
I bought all these hats to put on it, like the Sense HAT and the e-paper. I knew nothing about Python.
I still know nothing about Python, but you should see the
(03:46):
stuff that I'm doing. Because you'll pick it up.
No, I am picking it up, but AI does everything for you, right?
That's true.
Speaker 1 (03:56):
You see, Python is such a popular thing that AI knows a lot about it.
You can literally, yeah, like you said, just vibe code with Python.
Speaker 2 (04:04):
Yes, yes. I mean, I have the sensor, I'm tracking the temperature.
I didn't know anything.
I said, okay, write something to track the temperature from the Sense HAT.
Okay, now save it to a CSV file.
Okay, I need to display a web page.
So I went through, installed Apache on the Pi in a Docker container.
I have a JavaScript that reads a flat file, it's our CSV file of
(04:28):
temperatures, and goes from there, which is AI.
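[Editor's note: For readers who want to try this, here is a minimal sketch of that kind of Sense HAT logger, using the standard sense-hat Python library; the file path and one-minute interval are assumptions, not Brad's actual setup.]

```python
# Minimal Sense HAT logger: read temperature, humidity and pressure,
# and append each reading to a CSV file.
import csv
import time
from datetime import datetime

from sense_hat import SenseHat  # standard Raspberry Pi Sense HAT library

sense = SenseHat()
LOG_PATH = "/home/pi/temperatures.csv"  # hypothetical path

while True:
    row = [
        datetime.now().isoformat(timespec="seconds"),
        round(sense.get_temperature(), 2),  # degrees Celsius
        round(sense.get_humidity(), 2),     # percent relative humidity
        round(sense.get_pressure(), 2),     # millibars
    ]
    with open(LOG_PATH, "a", newline="") as f:
        csv.writer(f).writerow(row)
    time.sleep(60)  # one reading per minute
```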
Speaker 3 (04:29):
And what are the temperatures?
Oh, is it the sourdough? Is that the temperature?
No, I shifted.
Speaker 2 (04:34):
No, it's the temperature. I track the temperature, humidity and pressure of my house, okay, as well as track the temperature of the system.
So I made little graphs for it, so it was all AI driven.
And with that, Mr. Brian, sir, would you mind telling us a little bit about yourself?
Speaker 3 (04:55):
Sure can.
So, Brian DuBois, I'm the director of industrial AI for a company called RoviSys. We are a system integrator, and we are focused exclusively on manufacturing and industrial customers.
So, you know, I love AI, I love talking about this stuff, but I do like to kind of narrow the
(05:16):
scope somewhat, so we can go anywhere with this conversation about AI.
But, you know, I don't know anything about AI in fintech, I don't know anything about AI in healthcare. But AI in the industrial space, I am an expert in that, both in terms of what's possible today and how we can apply AI in the industrial space today, but also
(05:37):
kind of where things seem to be going in leveraging AI in manufacturing.
So, yeah, happy to be here today.
Speaker 2 (05:45):
Thank you.
Thank you for taking the time to speak about this.
AI is showing up everywhere, and AI is one of those terms that is like, oh, I know AI, or you know AI. It's so broad.
It's just like what people used to say: oh, you're an IT guy. Yeah, and they didn't understand that IT, you know, information technology, has so many different areas, and AI is the same.
(06:08):
As we've been going through this journey of talking with individuals about AI, I've also learned that AI has many different facets to it.
There's many pieces of it that comprise AI.
And AI, as you had mentioned, it's not just helpful with using the tools to create emails.
We can talk about generative AI towards the end.
I know that's a favorite topic of yours, so we'll jump on that.
(06:31):
But AI in the manufacturing space: can you tell us a little bit about some of the advances in AI, and even now, what we should call it, maybe, within the manufacturing space?
Speaker 3 (06:44):
Yeah, and actually what to call it is an interesting question.
So typically when I present on this, I talk about three categories of AI in the manufacturing space.
The first one I have dubbed traditional AI.
Now, it sounds kind of funny to talk about traditional AI in an industry that is really just now starting to adopt AI, but the reality of it is that in this category are
(07:08):
algorithms and ML models that have actually been around in the manufacturing space for quite a while, 10, 15 years.
So this comprises things like anomaly detection.
So if you guys are familiar with that, that's where you hook up a model to the process.
It needs no a priori knowledge, and it just starts monitoring
(07:28):
the process, and it can start to tell you when things go abnormal.
Now, importantly, it can't tell you why things are going abnormal.
All it can say is: based on everything I'm seeing today, it doesn't look like it did yesterday.
So that's anomaly detection.
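[Editor's note: As a rough illustration of the unsupervised anomaly detection Brian describes, and not RoviSys's actual tooling, here is a minimal sketch using scikit-learn's IsolationForest; the file names and sensor columns are made up.]

```python
# Unsupervised anomaly detection over process data: the model needs no
# a priori knowledge of failure modes, it just learns what "normal" looks like.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical historian export: one row per minute of process data.
history = pd.read_csv("process_history.csv")
features = ["rpm", "current_draw", "temp_c", "pressure_kpa"]

# Fit on a window of known-good operation ("yesterday").
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(history[features])

# Score new readings ("today"): -1 flags combinations of values that
# don't look like anything seen during normal operation.
today = pd.read_csv("process_today.csv")
today["anomaly"] = model.predict(today[features])
print(today[today["anomaly"] == -1])
```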
Speaker 2 (07:47):
When you're looking at anomalies, are you looking at anomalies for time, anomalies for output? Like, what are you measuring, or what is a good measure?
Speaker 3 (07:51):
Yeah, an anomaly detection algorithm can really do any of those things.
It could be maintenance, so it could be looking at the RPMs or the current draw of a drive or something like that, to determine whether or not it's the same as it was yesterday.
It could be looking at temperatures, pressures of the process, saying things are not going the way they did yesterday.
(08:12):
And it can typically pick up on those trends faster than a human operator, even an experienced human operator, can, because it's looking for very nuanced correlations between a lot of different variables.
So that's anomaly detection.
Speaker 1 (08:30):
Are these IoT devices that kind of collect all that data? And then that's what it's doing, it's just collecting all this information and seeing if there's any anomalies?
Speaker 3 (08:40):
It is, yeah. And it's interesting, because five, seven years ago there was that big push around IoT. You guys remember that, and everyone was talking about IoT.
Speaker 2 (08:50):
What's IoT?
Internet of Things.
Speaker 3 (08:52):
Internet of Things. And then there was an industrial Internet of Things, so there was IIoT that was being marketed to my industry.
And the interesting thing about it is that now, fast forward five years, we are not seeing the adoption of IoT like I think everyone thought was
(09:13):
going to happen. And part of the reason why is that we already have sensors, we already have instrumentation, we already have all of these things, and they flow through what's called the control system, which is the system that actually makes everything move and work inside the plant, and that's all been around for 30-plus years.
So IoT was just kind of an adder on top of that.
And there were some very specific things where it was an
(09:34):
interesting choice to have it.
You know, it's actually, Brad, you brought up humidity and temperature.
Those are the types of things where we could slap an IoT sensor in. It was cheaper than trying to network all of that through the control system, and great, so now we've got a couple extra data points that we can use.
But the vast majority, probably over 85%, of the data coming from
(09:54):
the plant floor comes from the existing instrumentation, sensors and things that we already have.
So anomaly detection, oftentimes, to get back to your question, we can oftentimes just use the data sources and the data trends that exist on the plant floor today, and we can send that into the anomaly detection model without having to add a whole
(10:16):
lot of extra sensors and instrumentation.
Another area where IoT got a lot of play was around predictive maintenance.
So this was the idea that I can attach one of these IoT modules to a drive, to any kind of rotating equipment, and it would determine vibration, it would determine temperature, and so on. And there's been some adoption of that, but
(10:39):
not the uptick that I think a lot of people anticipated around IoT.
So that's anomaly detection.
We're still talking about traditional AI.
Another category under that would be the predictive models.
So anytime you hear the word predictive, they're pretty much all the same.
So predictive quality, predictive maintenance, predictive set point. The idea is that you're going to take large volumes of very clean, very correlated data, you're going to
(11:00):
send them into a model, and ultimately you're going to be able to learn how to predict a single value.
Now, that's all it will ever be able to do, right?
So in the case of predictive quality: what's the quality of this batch going to be before I complete it?
Right, in the case of predictive maintenance: how many days until this piece of equipment is going to go down?
You're going to be able to predict a single value.
(11:21):
Now, importantly, built into that prediction is that someone has to know what to do with that prediction, right?
So if I can predict that the quality of this batch is going to be low, it's going to be off spec, somebody has to know what to do to be able to fix that, right?
What additives do we need? Increase the temperature, reduce, you know, the pressure?
They have to know what to do with that information.
(11:44):
But that's the predictive category.
And then the final subcategory under traditional AI, I typically lump in all the vision stuff.
And again, vision's been around for a long time in the industrial space.
So we've been doing vision, we've been doing object detection for a long, long time.
Even defect detection we've been doing for 15 years.
I will say that the vision systems have advanced quite a
(12:07):
bit in the last couple years, and so we can do more with them.
But the other thing that I kind of emphasize with clients that I talk to about this is that vision systems should really be another source of signal.
So with a typical vision system, you should be able to get four to ten new signals coming off of that.
Not just, I detect that there's an object here, but how many of
(12:27):
those objects, and where are they placed, and maybe heat signatures and things like that, an angle of a certain thing coming through a conveyor.
Let's get all of that data and send all of that back to the control system, so that we can make better decisions with it.
So again, that's traditional AI, and then I can talk about the other two here in a second, unless you guys had any questions about that.
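[Editor's note: To make the predictive category concrete, here is a minimal sketch of a predictive quality model, an illustration only, with hypothetical column names: a regression learns to predict the single value, final batch quality, from clean, correlated process data.]

```python
# Predictive quality: learn to predict one value (final batch quality)
# from historical batch data, then score an in-progress batch.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

batches = pd.read_csv("batch_history.csv")  # hypothetical: one row per batch
features = ["mix_temp_c", "mix_time_min", "pressure_kpa", "additive_pct"]
X_train, X_test, y_train, y_test = train_test_split(
    batches[features], batches["quality_score"], test_size=0.2, random_state=0
)

model = GradientBoostingRegressor().fit(X_train, y_train)
print(f"Holdout R^2: {model.score(X_test, y_test):.2f}")

# The model only ever answers one question: the predicted quality.
# Someone still has to know what to do when it comes back off spec.
current_batch = pd.DataFrame([{
    "mix_temp_c": 71.2, "mix_time_min": 42,
    "pressure_kpa": 101.1, "additive_pct": 1.8,
}])
print("Predicted quality:", model.predict(current_batch)[0])
```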
Speaker 2 (12:48):
I have lots of questions with all of this, because I could see, within a facility, the savings that they could have with incorporating AI. And you said a lot of this could be used with the existing controls that you have on the floor.
How would one know, or begin to go down this road,
(13:14):
to see how they could incorporate AI into their existing structure?
And then another question I'll ask. I like to ask a lot; I stack on the questions because I get excited about this.
I hear you mention the word model, right? This is another one of those.
So the two biggest words I hear, when I hear, uh, AI, is AI, or a phrase, I guess you could say, and model.
(13:34):
So what is a model, and how does one know which is the most appropriate model to use, and where do these models come from?
Speaker 3 (13:47):
Yeah, okay, so there's a lot there to unpack.
Yes, let's address the model thing first.
So, a model. We use these terms, even I use these terms, kind of sloppily. So we talk about AI very broadly, but I don't even know that anyone has really categorically said, here's everything that's in AI and here's everything that's not in
(14:08):
AI, right?
So you could throw in things like decision trees and rules-based systems.
When I was in college, many, many years ago now, I took a course on AI, and at the time neural networks were kind of out of vogue, and at the time rules-based systems were what it was all about.
So everything we learned about in this AI class was all about
(14:29):
rules-based systems, and we barely talked about neural networks.
Now, fast forward many decades later, and neural networks have come back very strong and are at the heart of a lot of these models.
But ultimately, I guess, if I had to kind of very broadly look at it, a machine learning model, an ML model, is something that
(14:51):
you can put inputs into, and it's going to leverage those algorithms, and then it's going to give you some kind of output back out.
Now, typically that's in the form of a prediction, but in the case we've seen with GPT models, that's going to be in the form of effectively guessing what the next word is, and completing those phrases and sentences with
(15:11):
those next words.
In the case of autonomous AI, which is the next category of AI that we're going to talk about, it is really perceiving the current state of the system and then making a decision about the next best move that you can make.
So we'll talk about that here in a second.
I want to address another question you had, though, about
(15:31):
where are we getting all this data from, and where do people start?
So most people underestimate the amount of data that they already have in their plant, in their facility.
They are typically collecting orders of magnitude more data than they realize.
If they're not collecting any data at all, then they've kind of missed the boat, and they missed the messaging over the
(15:53):
last two decades.
So when I started, I've been with RoviSys for 25 years now, so when I started my career in the early 2000s, we were still educating everyone as to why they should collect this data, right? And I remember, I'll never forget, I had a customer who said, well, we just throw out the data, why would we keep this data, right?
And now, 25 years later, that sounds kind of quaint, but that
(16:13):
was the mindset back then. Like, why are we going to spend this money to collect this data?
What in the world would we ever do with it?
Right?
So now the good news is that most manufacturing customers drank the Kool-Aid, and in the last two decades they have been collecting all of that data from the plant floor.
So they typically have lots and lots of data. And again,
(16:34):
everything on the plant floor, especially within the last 10, 15 years, everything on the plant floor is smart, right?
This is smart equipment, smart assets.
You know, when I started my career, we were still dealing with very, very old equipment that could maybe give you 10 data points.
Now, every piece of equipment can give you like 100 data points.
It can self-monitor, it can give you all kinds of
(16:54):
information about how it's performing.
So we've got actually lots of data.
It's rare, it does happen sometimes, but it's rare that we have to add more instrumentation or sensors.
We typically have all the data we need to do what we want to do with it.
Oftentimes, though, that's necessary, but it's not sufficient.
So we've got all that data from the plant floor.
We've captured it all in these time series databases that we
(17:17):
call historians.
When I started in this career I knew nothing about historians, but they are dominant in this space.
They're time series databases, and we use them all over the place, and we can gather that data from those time series databases.
But we have to build data sets out of them. And oftentimes we're taking data from the plant floor
(17:39):
and we're mixing in IT data.
So think of data from the systems you guys typically deal with: the Dynamics systems, the ERP systems, your supply chain. We're mixing that data in to build a complete data set, a complete picture of what it took to make that particular product.
Now, with that data set, we can start to use it to train those ML models that we were talking about, to be able to make predictions.
(18:00):
So I'll give you an example of that.
We had a customer, and they do supplements, like powder supplements, and so they fill these plastic tubs with powder, right?
And so they came to us and they said, look, we've got an issue where we'll run, you know, we'll fill these containers, right, we'll fill
(18:20):
these containers.
Speaker 2 (18:22):
Right, you mentioned supplements.
I just have to interrupt.
Yeah, when you talk to them, I'm interested in hearing this, but can you tell them, when they make those containers, to not put that little lip around it, so that you can't, like, pull the last of it out?
Because you fill that up.
You could scoop it out, you can scoop it out, but then at the very end you can't dump it out, nor can you scoop it out, because the scoop that they put
(18:43):
inside is not small enough to get to the little round proper shape.
Speaker 3 (18:50):
See, everybody knows what I'm talking about.
I will feed all this back.
Speaker 1 (18:53):
Well, they should use AI to be more efficient with that stuff.
Speaker 2 (18:56):
Come on. Yes, yes, yes, so I don't waste my supplement, because I can't get it out. And oftentimes I, like, flip it upside down.
Speaker 3 (19:04):
I bang it, because I try to pour it into the other container, because I don't have enough left. And you're getting it all over the counter, and I wasted it, right?
Speaker 2 (19:11):
So if you just pass that along. I will pass that on, I'll pass that on.
Speaker 3 (19:14):
So they're filling these containers, and they're running a whole batch of the powder, and no problems, fills fine. Same batch, same SKU, same everything.
And they start filling, you know, the next batch into the containers, and the filler jams up.
Well, when that happens, it is hours and hours of downtime for
(19:35):
them to clear the line, clear the fillers, get everything reset and restart the process. And they're, like, banging their head against the wall.
They're like, we don't understand what's going on, why these fillers are constantly jamming. Again, all indications are that we're running the exact same product here.
So what is the problem?
So that's the kind of problem where all the easy solutions have already been tried.
(19:56):
Right, they've tried all the easy stuff.
Now they're looking to AI to try to tease apart that problem.
But to be able to understand what the actual problems are there, and get to that root cause, we've got to look at a lot of different data sources.
So we've got to build data sets, like I said, that pull together data. And in the end, I don't remember exactly what it was, I'm paraphrasing, but it was basically like, when the raw
(20:19):
material was from this supplier, and it sat in the warehouse for this long, and the humidity was this, and this particular filler had not been maintained in X number of days, or whatever, that's when we see this, right?
That's when we get this perfect storm where the filler clogs up.
Well, okay, but all those things, all those data points
(20:39):
that I just talked about, you can imagine all the different data, all the different places we've got to go, to actually be able to build a data set that captures that.
So now we can leverage data science to actually tease that apart and find out what the root causes are.
So we give them that report, and they're like, oh, thank God, this is it, this is what we needed.
(21:00):
But then we don't stop there.
We take it to the next level, and now we build an ML model that we can operationalize. We can hook it up to the plant floor so that it can monitor when that perfect storm is coming, so that they can now have some early warning as to, hey, hang on, you're getting into that situation
(21:22):
where all the pieces are falling into place, where you're going to start to get into jams on the filler.
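[Editor's note: A rough sketch of what that operationalized early warning could look like, with hypothetical feature names, threshold and model file, not the customer's actual system.]

```python
# Operationalizing the model: score live plant-floor conditions and raise
# an early warning when the "perfect storm" for filler jams is forming.
import joblib
import pandas as pd

model = joblib.load("filler_jam_model.joblib")  # hypothetical trained classifier

def check_jam_risk(live_reading: dict, threshold: float = 0.8) -> bool:
    """Alert when the predicted probability of a filler jam is high."""
    jam_probability = model.predict_proba(pd.DataFrame([live_reading]))[0, 1]
    if jam_probability >= threshold:
        print(f"EARLY WARNING: filler jam risk {jam_probability:.0%}")
        return True
    return False

# Example live reading assembled from the historian and the ERP system.
check_jam_risk({
    "supplier_id": 3,
    "days_in_warehouse": 41,
    "humidity_pct": 68.0,
    "days_since_filler_maintenance": 19,
})
```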
Speaker 1 (21:25):
You know, you had made a comment about using, there's already a lot of data, and you can use that existing data.
I think that's a big problem right now, at least in my experience, where they want to put more data in, but they fail to understand what you already have.
So let's start with what you already have and get the most
(21:48):
out of that, and then, like you said, slowly bring in other data points if you are trying to solve other things, or maybe you realize, okay, we've used all this data, we need a little bit more.
How can we do that? Or what other data points can we add?
So that's a good call-out, because I get this all the time: we need more data, we need more data. And you already
(22:09):
have plenty, so let's answer that first, let's take that first.
Otherwise you're causing more variables, and then you're not getting the answer that you may be looking for, or the right questions you should be asking.
So I appreciate you calling that out, because it's a big problem still.
Speaker 3 (22:25):
And let me build on that.
So, you know, I typically tell customers, there's no limit to the amount of data I can give you, right?
Like I said, everything on the plant floor is smart. I could overwhelm your systems with the amount of data coming from the plant floor.
That's not actually what you want.
So what we end up doing is we start with use cases.
(22:46):
So we're big believers here that use cases should lead, technology should follow, right? And so we identify what the use cases are, like in the case of that filler problem.
Speaker 2 (22:57):
We keep interrupting, I'm sorry, but I like that, because you have to identify the problem that you're trying to solve before you solve it with technology.
A lot of times, people think, I can use technology because it exists, therefore I have to use it, versus using the right technology to solve the problem at hand.
Speaker 1 (23:15):
Just like you, Brad,
when you're developing.
They're just like, hey, can you add this field so we can collect this data? But it's a calculation of other areas.
Well, why do you have to create that field?
Just calculate it from the backend side of things if you just want that data.
So it's pretty wild, yeah.
And then you work backwards.
Speaker 3 (23:33):
Once you've identified that use case, like the filler problem, right, now we can work backwards and say, okay, we're going to need information from your warehousing system, we're going to need information about your raw materials from your ERP system. And yeah, so then we start to build that data set.
Now the good news is that these data sets typically can answer a lot of questions.
So it's not like you're building a data set and you can
(23:55):
only ever use it for one use case.
So typically you're identifying two, three, four use cases that all are going to use a very, very similar data set.
Now let's build that very clean, very correlated data set, and now we can start to attack these different use cases with it.
Speaker 2 (24:16):
My mind goes in a million different directions with this.
It's so practical in how it can be helpful, and to see, and also to make sure, as you said, you figure out your use cases, which oftentimes people have a hard time identifying, right?
It's also, I know what I need to do, but okay, let's just throw technology at it, let's just throw this at it, and, you know,
(24:37):
hope that it is solved.
So, wow. So now you have the data points, a term we added, the data points.
We determine the use cases.
Then you can utilize technology.
So you take the data sources from the different systems, from your ERP system, from your control systems, and now you have this model that you made, or this model that exists.
How do you tell it what to do?
(25:00):
How does it know what to do?
Speaker 3 (25:03):
well and again.
So, okay, so we're still inthis realm of traditional AI,
right?
So traditional AI effectivelyjust can answer questions.
It's just going to make aprediction, so you're going to
send it inputs and it's going topredict an output.
That's all it ever is going todo and, importantly, it can only
ever predict one output, right?
So, again, in the case of thatfiller system, are we trending
(25:24):
towards that issue that we havewhere the fillers, you know,
clog up?
But you're hitting on animportant point there, and
that's that many, most many ofmy clients, when they start to
think about this, and even ifthey've got, you know, maybe an
internal AI team, data scienceteam that has started to build
some models and things like that, the problem is is that no one
(25:46):
sitting around the table knowshow to actually operationalize
those models on the plant floor,and that's one of the key
points that I make to mycustomers is is that until that
model is put in operation, untilsomebody is making decisions
based on the predictions of thatmodel, you have not seen a lick
of ROI.
Everything that went into that,right, was a big science,
(26:07):
expensive science, experiment.
Until it's actually on theplant floor and people are
taking action based on theprediction of that model, and
what that also touches on,though, is organizational change
management, because now you'retalking about trying to get
operators and supervisors andfolks who maybe have spent a lot
of time on the plant floor toand they've learned to trust
(26:30):
their ears and how the machinessound and their smell and how
things look and stuff, andyou're like forget all of that,
put all of your trust into thisAI model, right, and?
Speaker 1 (26:40):
And that's a big lift. Really quick on that, because that has always been a problem, always, and I don't think it'll ever go away, because we know that there's a lot of forecasting tools there for, you know, predicting, like, when to order.
So demand forecasting, right?
It's always that problem.
Anytime that I've implemented demand forecasting, there's always that one or two people in the company who are like, well, I've done
(27:03):
it for this long, I don't trust it, and I'll never trust it.
And so you spend all this time and money and effort, and you're always going to have that one person who's like, I'll not trust it, I'll make some adjustments and stuff like that.
But people need to give it a chance. Like, hey, let's give it three months, six months or a year, right?
And Brad and I had a conversation about AI in general.
(27:24):
Where do you trust enough in AI's responses and results? If they make one mistake, all of a sudden we're like, oh, we don't trust it at all.
But if a person makes a mistake, it's okay for us.
Ah, you made a mistake, you're human.
Blah, blah, blah.
You can make more mistakes.
Speaker 3 (27:44):
So it's like there's a bar higher for AI. The bar is higher for AI, and I'm sympathetic to those folks.
Like, I really have a ton of respect for the folks who run these plants, and so I think that we have a good approach there, because, you know, as a system integrator, we've been around for 36 years now, so we've been implementing new technology on the plant floor for a long, long time.
So I think we've got a good approach there.
(28:05):
But part of that is getting those folks engaged right from the beginning, and making sure they feel like they're part of the project, and making sure that they feel like this is a solution that they helped implement.
That's a key aspect of getting over those objections and making sure that there's buy-in from all of them.
But to your point, Chris, I'm also sympathetic to the fact
(28:28):
that, yeah, it's a higher bar with AI than it is with humans.
You're investing this money, it's new technology, and the expectation is that it's going to be right pretty much all the time.
Whether or not that's fair, that's just how humans are.
So we've got to kind of work within those bounds.
Speaker 1 (28:44):
Yeah, yeah.
But I think there's a way to get around that. You know, Brad and I have conversations with industry experts where, you know, there's that trust relationship you have with someone that's human, and you understand that they can make a mistake and all that stuff.
But from the AI side, right now as it is, we kind of just take
(29:05):
it for what it's giving you.
Like, I don't know how accurate it is. But if you have a little bit of visibility of, like, okay, what's the probability of the accuracy? Or what's the accuracy of this? If it's, like, 99% accurate, and it's telling you, hey, this is 99% accurate, then I'll have a little bit of trust.
But if it's coming back to say, you know, it says I'm 60%
(29:27):
accurate, okay, maybe I need to add more data points to make it more accurate.
Right now there's nothing, there's no system out there that would give me that.
Currently you're just kind of like, oh, that's the result.
There's some calculations done in the background.
Speaker 2 (29:41):
I don't know. It's a mathematical calculation.
That goes to your whole person conversation. Yeah, somebody comes to do service at your house, you're going to trust that they know what they're doing and they're going to be able to fix the problem.
You don't know what's behind the box, I guess you could say.
I understand the point of it. It's, how does one build trust in anything?
(30:02):
How do you build trust in driving a vehicle? Forget AI.
How do you build trust in walking over a bridge?
How do you build trust in everything?
And that's the question, and that's what we need to come up with in this case.
This is, I think, a little more... I don't want to take away from your time, but you're kind of going to shoot me down a different path here for a minute.
(30:23):
It's new technology.
It's scary technology to many. Because, to be honest with you, if you look at what you can do with coding, and again, we've had the conversations, Chris and I, with others, about vibe coding. What's vibe coding?
Listen, people have been taking code and reading samples and doing things and putting code together for years, so that's not a new concept.
(30:47):
The concept is, you can do this a little bit quicker, and it's coming back, and it's almost like magic.
Right, I tell people it's magic, because sometimes I start typing something and it fills out lines and lines of code that I was just thinking about.
So I think there's a little bit of fear in that, because nobody really understands it or knows what it is.
But then how do you come to have trust?
Chris, to your point, what do you need to have trust?
(31:09):
Because no one's going to tell you that something can be 100% correct.
Right, there's too many variables on this planet. You can't have something 100% correct, because even if something's level or not level, just have a slight shift in the earth, and now you're not level and something will go off.
So how do you gain trust in a system that you use?
Speaker 3 (31:31):
Yeah, and I think the answer is, it's the same.
It comes back to organizational change management, which has been around for a long time, right?
So we understand how to get folks to adopt new processes, how to get them to adopt new systems.
None of that is new, right?
Yes, it's a new tool with AI, but that's one of the things
(31:55):
that I feel like is part of my job, and the role that I have, working for an independent system integrator, is to demystify what AI is about, right?
So it's a new tool in our toolbox, yes, but it's not like we have to throw out the whole playbook, the whole rule book, of how to implement new systems, new processes within an
(32:18):
organization.
We know how to do that.
As humans, we've been doing that, consultants have been doing that, for decades now. So it's just about leveraging that organizational change management to build trust in this new system.
This new system happens to be very capable. It's called AI, but it is just another system that we're implementing.
I will say that there is one other aspect, though, to AI, and
(32:39):
researchers are working on this, but it'll be a ways out still, and that's called explainable AI. I don't know if you guys have looked into that much.
But, you know, one of the challenges of AI right now is it is a black box, so it is very difficult to understand how it came to the answer that it came to, right? And that's where, Chris, you were talking about, like, I don't know if I can trust this answer or not.
(33:01):
How did you even get to this?
And that is something where, at least with a human, if they make a bad decision, you can go back to the human and you can say, well, why did you say that? Well, I thought we were in this state, but it turns out we were in this state, right?
So explainable AI. There's a lot of research there, and what that will give you is the ability for the AI to go back and say, here's why I said what I said. Oh yeah, the reasoning.
(33:22):
It's the reasoning, the reasoning of how I got to this point.
It's actually a really hard problem, a surprisingly hard problem, for AI to do, but that's what they're working on, so that we can at least get that, and I think that will help with building some of that trust.
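[Editor's note: Full explainable AI is still research, as Brian says, but for the traditional models a partial stand-in exists today. This illustrative sketch, with made-up data, uses the shap library to report how much each input pushed a particular prediction.]

```python
# Feature attribution as a partial answer to "why did the model say that?":
# SHAP values estimate how much each input pushed this prediction up or down.
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

data = pd.read_csv("filler_history.csv")  # hypothetical labeled data set
features = ["days_in_warehouse", "humidity_pct", "days_since_maintenance"]
model = RandomForestClassifier(random_state=0).fit(data[features], data["jammed"])

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data[features].iloc[[0]])
print(shap_values)  # per-feature contributions for this one prediction
```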
I did want to talk about the next category. So we talked about traditional AI.
(33:42):
Let's look at the next category.
There's three of these categories.
The next one is called autonomous AI, and autonomous AI, to me, is the future of manufacturing.
This is where we want to get to.
Autonomous AI does, frankly, what most of my clients want AI to be able to do, and that's that it makes a decision.
So it actually looks at the state of the system and it says, here's your next best move.
(34:03):
So, unlike the predictive models, where it can recognize, maybe, that there's a problem but it doesn't know how to fix it, autonomous AI can actually work its way out of a problem.
And what it leverages is this underlying algorithm, it's called deep reinforcement learning. It's
(34:25):
been around now for almost a decade, and it allows the autonomous AI to make decisions like a human can. It can actually build long-term strategy.
This all came out, I don't know if you guys remember, around 2016. There was DeepMind, it was a Google spinoff, and they built a program to be able to play Go. AlphaGo, yes.
Speaker 2 (34:44):
I remember that.
Speaker 3 (34:46):
Yes, okay. Well, that didn't go away.
I mean, it made a lot of press back then, but then it kind of went underground or whatever. It didn't go away.
That algorithm has made huge impacts on a lot of different industries, including the manufacturing industry, so we have actually been adopting it and leveraging it.
So I've got DRL models that are running in plants right now
(35:07):
that are acting like expert operators, and it is like magic.
It's wild to see these autonomous AI models, these agents, and what they're able to do, and how well they're able to perform, in a lot of cases outperforming even the experts who helped train them.
So autonomous AI, I'm very bullish on.
(35:29):
I do feel like that is the inflection point in the history of manufacturing.
This is going to be the next big thing, this autonomous AI.
I know everyone thinks it's going to be generative AI, and, like I said, we can talk about that here in a minute. That's my third category, generative AI.
Yeah, and we'll talk about that here in a second.
But I really believe that autonomous AI is going to be the
(35:52):
thing that, when we look backwards, that's going to be the thing that propelled us forward in this manufacturing journey.
Speaker 1 (35:58):
I think that's the case, because the big focus right now is the agentic AI, right?
So now there's a term, and I read this somewhere, I don't remember where, where, you know, back in the day, it was "there's an app for that," right? There's an app for that, when you got your cell phone.
Now the idea is, in the next, you know, five years, or the next year, it's going to be, there's an
(36:21):
agent for that, right? There's an agent for that.
So it'd be the agentic AI, where it's going to be autonomous, where it's going to do things for you, removing the tedious component of that.
I think that's great, but for me, long-term wise, that's perfect to do all that tedious work. But I would love to have a little bit more of a predictability, where, you
(36:43):
know, it's not so much generating responses. It's more like, I want it to predict what my life's going to be, or, if I do this, what would happen?
More so than just having a conversation. I can have a conversation with anybody.
Speaker 3 (36:59):
Well, and so you're starting to hit on some of the limitations of generative AI. And unfortunately so.
Forbes, I think, said 2025 is the year of the AI agent, right? And so there's all this focus on agentic AI and agents.
The problem is that the underlying technology that they're looking at is generative AI.
(37:20):
Well, the big problem with generative AI is that, and Apple proved this once and for all in a study that was released in October of last year, generative AI can't reason.
That's a pretty big limitation of any AI system. So it can't reason.
It does not understand causal effects.
(37:41):
They were giving it simple math problems, and it could not reason through these simple math problems, let alone the types of problems like you're talking about, Chris, these big, complex problems. Think about political problems, think about big legal problems, think about these big problems that humans face. And generative AI struggled to reason about the simplest math problems.
So all of the intelligence that we attribute to generative AI
(38:06):
is actually us just projecting intelligence onto it.
It is very, very dumb.
Generative AI is good at one thing, and that's guessing what the next word is.
It is effectively an autocomplete on steroids, and I hate to pull the curtain back for those folks who don't realize that.
But that's all that it is.
And so the challenge is that these agentic AI, what
(38:29):
they're doing is they're having generative AI, effectively similar to writing a program, lay out a script of what it should do to try to accomplish that task, and then you can hand that script off to something else to actually run the code, right?
Well, the problem is that, I mean, it's so limited in what it can generate, and it really can only ever generate things that
(38:51):
it's seen in the past. Like, it has to have seen something like that.
Now, it's been trained on vast volumes of human information, right?
So it's seen a lot, but it still is not going to be able to get creative, it's not going to be able to work around certain problems.
And then you mix into that the problem that it has with what are called hallucinations.
And so for your listeners who are not familiar with
(39:13):
hallucinations, what that is, is that's where the generative AI just makes stuff up.
It just makes it up out of whole cloth, and you can't tell the difference between what's made up and what's real.
And I'll give you a couple examples of that.
I was using it the other day. My daughter and I, we were going to generate a playlist together.
(39:34):
And so I go into ChatGPT, I'm like, generate a playlist of, I don't remember what it was, beach songs or something like that. And so it generated like 25 songs.
And so we're starting to program these into a Spotify playlist.
And we get to this one, and we're looking, we're searching for the song. I'm like, I am not finding this song anywhere.
So I go back to the generative AI. I go, this song here, did you make that up?
And it's like, yeah, thanks for pointing that out.
(39:56):
I actually did make that up. And this happens way more than people realize.
A more potent example happened in May of this year.
The Chicago Sun-Times published, in one of their Sunday circulars, kind of a fluff piece.
It was the summer reading list for 2025.
(40:17):
I don't know if you guys saw this, but it was their summer reading list for 2025, right?
And, well, someone recognized pretty quickly, and posted on Twitter, or X, that it turns out that, out of the 15 books in that list that it had generated, only five of those books actually existed.
The other 10 books were completely made up. Now remember,
(40:40):
this was an article in the Chicago Sun-Times, you know, Sunday circular.
So of course the Chicago Sun-Times was embarrassed.
They went back and they did research into what happened there.
They interviewed the author.
The author of the article did admit he had used ChatGPT to generate the article, and he had not bothered to
(41:03):
double-check that any of those books actually existed.
His editor didn't bother to check that any of those books actually existed.
So this is a real problem.
And so when people talk about agentic AI and they're really stoked about it, I'm like, oh boy, that is very, very dangerous.
To start giving this technology, that can just make stuff up and
(41:25):
go down these really weird tangents, the ability to actually execute that code itself makes me very, very nervous.
Speaker 2 (41:31):
I'll throw a twist in this, though, just for the conversation.
How is that different than a person?
Because it goes back to what I was saying. We talked about the trust. You talked about, it doesn't have the ability to reason.
To be honest, with that question, I question a lot of people.
I was going to say the same thing. And also, you think of
(41:54):
creativity, and human creativity, and what humans put together.
A lot of times we put together things based on what we think we know, or what we remember of what we've experienced.
So if you're saying you're feeding off this information to AI, is it its lack of reasoning, or is it lack of being able to experiment and gauge the results?
Because if you're coding, you can say, create
(42:15):
a script, give me a script. I can say that to Chris, and Chris can give me something, and it may not work.
It may work, it may be whatever.
And the only way we know is we have to test it, to make sure that it works properly, based upon the requirements that we were given.
So I'll always go on saying, AI is a tool like any
(42:35):
other tool, and people need to realize that. And then, with certain things, it may do better, I guess you can say, and some other things it may not.
But still, just as I'm not going to have a bunch of people build a jet without having somebody do an inspection to make sure that the jet was built properly, or put together properly, so
(42:58):
there are things that someone should do when it comes to AI, to make sure that whatever they're using is sound.
Speaker 3 (43:08):
Well, so I'll say there's two issues there that I think make it distinct from just going to your assistant and saying, can you do this for me, can you book this for me, or can you write this code for me, to a human assistant.
I mean, there's two kinds of distinctions there.
One is the perception of performance. And right now,
(43:29):
because we're on that, you know, hype cycle, all these folks that you're talking about, who would use this tool, are being told by the media that AI is here, the future is here, it's going to do everything you want it to do.
We've got these agents now that do these amazing things.
And the vendors, honestly, are incentivized to say that, right? These AI vendors are incentivized to create that
(43:52):
perception.
So we've got this perception, this incorrect perception, that's being pushed down to the masses, and most people...
It's funny, like, I thought this was so well known, and I'm shocked, more and more, when I talk to people, that they did not realize. I was talking to my mom, this was last month, and I was talking to her about ChatGPT, and I'm like, now, you know that it can make stuff up, right?
(44:13):
She's like, what do you mean, it can make stuff up?
I'm like, it can just make stuff up. Like, it'll make facts up, and it will give the appearance that those are correct, and it will not be correct.
It will just make stuff up. And she's like, I didn't know it could do that.
So that's a problem, when you have the masses leveraging these tools without understanding what the limitations are.
(44:33):
All they get is this little thing at the bottom that says, AI can make mistakes, double-check its work.
Right, that's it.
That's all we get. Not this deep analysis of, no, it can make really big mistakes and say really, really misleading things.
That's the first problem.
The other problem, and this is really a big issue, is the hubris of the models themselves.
(44:54):
So, I'm a heavy user of ChatGPT, so I run up against its limitations pretty frequently.
It will frequently tell me it can do things it can't do.
Now, if you had a personal assistant that you hired, and they came in through the interview process and they said, yeah, I know how to do this, and I know how to do this, and I can do this, and, yeah, I've done that a million times,
(45:16):
whatever, it's not going to take you very long to realize if they were just full of it. And they get in there, and it was like fake it till you make it, and they can't do any of those things.
You're like, you don't even know how to use Excel.
You don't know how to use any of these tools.
You said in your interview process you could do it.
That's the level of hubris that these models have. So frequently,
(45:37):
I had an existing PowerPointpresentation it's a training I
do and I said I don't like thelayout, the flow of this
presentation.
So I fed it the presentationand I said can you help me kind
of reorganize this so it kind offlows better?
Right, and so it did.
And it gave me this niceoutline and said, okay, what if
you move this slide up here?
(45:58):
And it was great, great, okay,awesome.
Then it says would you like meto reorder that PowerPoint
presentation for you and get allthese slides?
And I can do all that for you.
I'm like, really, you can?
Okay, sure, and it justcorrupted the PowerPoint
completely.
It can't do it, but it, withall the confidence in the world,
(46:19):
said yeah, I could absolutelydo that For sure, give it to me,
I can do whatever you need meto do with it.
Right, that's a problem.
That's a problem when the AIitself doesn't understand its
own limitations.
That's how we get ourselvesinto some really bad places.
So it's twofold.
Yes, it's the education side: the masses are on that hype cycle, and we'll get to, what is it, the trough of disillusionment, eventually, and people start to
(46:43):
realize the limitations.
Speaker 1 (46:49):
But the other big problem is the AI itself overpromising and underdelivering, over and over again.
I think, from a personal use, yes, I see that being a big problem. But maybe, and again, this is just my opinion, from an enterprise level, from a business level, you can ground those models to know, like, here's your limit, just work within these bounds of
(47:12):
the information that you're given, so you can minimize the risk.
I'm not saying eliminate the risk entirely. And I mean you could do the same thing with ChatGPT.
You can create a parameter of, like, this is the type of what you are, only work here in this space, feed it whatever you want to feed it.
So you could do that and limit some of those risks, in creating
(47:34):
those limits for that AI model.
But it requires a little bit of work, and it requires people to understand that.
Unfortunately, a lot of people were sold, like you said, in the media, where it's going to solve all your problems.
Well, that's not the case, because you still have to understand the tool, as Brad mentioned.
(47:54):
You have to understand that it is a tool, and a tool could be used incorrectly.
Can they still use it? Absolutely, but you can certainly use it incorrectly.
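[Editor's note: A rough illustration of the grounding Chris describes, using the OpenAI Python client; the prompt, model name and refusal wording are all assumptions for the sketch, not a vetted production guardrail.]

```python
# Grounding by constraint: a system message narrows the model's scope and
# tells it to refuse rather than guess, reducing (not eliminating) risk.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = """You are a maintenance assistant for Line 3 fillers ONLY.
Answer ONLY from the provided manual excerpts. If the answer is not in the
excerpts, reply exactly: "I don't know - escalate to a senior technician."
Never invent part numbers, torque values, or procedures."""

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical model choice
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "What torque setting clears a filler jam?"},
    ],
)
print(response.choices[0].message.content)
```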
Speaker 3 (48:04):
And there are. You know, it's not like the LLM researchers, the generative AI researchers, don't know about this problem, right?
They are actively looking into it.
And, to your point, Chris, there's a technology to ground it.
It's called RAG, retrieval augmented generation, where, basically, it forces the GPT to cite its source.
You have to give me where, in whatever manual or whatever you
(48:27):
read, you've got to give me chapter and verse.
You've got to point to where. And you've seen this now, ChatGPT has incorporated this, where it will occasionally give you sources, right, so that it can give you an idea of where it found that information.
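[Editor's note: The RAG pattern in miniature, illustrative only: retrieve source passages first, then build a prompt that forces the model to answer from them and cite the source. The toy keyword retriever here stands in for the vector or enterprise search a real system would need, which is exactly the weak link Brian goes on to describe.]

```python
# RAG in miniature: retrieval first, then a prompt grounded in the sources.
DOCS = {  # hypothetical manual excerpts
    "filler_manual_ch3": "Clear jams by stopping the auger and opening guard B.",
    "filler_manual_ch7": "Lubricate the drive chain every 500 operating hours.",
}

def retrieve(query: str, k: int = 1) -> list[tuple[str, str]]:
    """Rank documents by naive keyword overlap with the query."""
    words = set(query.lower().split())
    ranked = sorted(
        DOCS.items(),
        key=lambda item: len(words & set(item[1].lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_grounded_prompt(query: str) -> str:
    excerpts = "\n".join(f"[{name}] {text}" for name, text in retrieve(query))
    return (
        "Answer using ONLY these excerpts, and cite the [source] you used.\n"
        "If they don't contain the answer, say you don't know.\n\n"
        f"{excerpts}\n\nQuestion: {query}"
    )

print(build_grounded_prompt("How do I clear a jam on the filler?"))
```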
But there are limitations to RAG.
One of the big limitations in the enterprise setting is that RAG relies on comprehensive enterprise search, which is a
(48:51):
problem that we've been trying to solve for 20-plus years now, and nobody really has a good handle on enterprise search, and RAG relies on that to be able to find the sources of its information.
So there's limitations there.
But to your other point, Chris, so let's take this now to the plant floor.
Speaker 2 (49:11):
This is why I... When you go back to that, can you explain what you mean by enterprise search?
Speaker 3 (49:16):
Yeah, I'm just saying, like, enterprise search. When's the last time you used enterprise search to try to find something on your network?
Did you find it the first time you searched for it?
Did you find it the second time you searched for it?
Did you finally give up trying to find that document, and you just recreated the thing from scratch?
Enterprise search is one of those really hard problems that we really still have not solved, and we don't have good
(49:40):
answers for enterprise search.
People just live with the fact that enterprise search kind of, sort of works. And RAG, which is that technology you're talking about, where you're forcing the generative AI to cite its source, relies on good, solid enterprise search.
I've seen these AI vendors' architectures, and they'll lay out the whole thing, and in order for it to cite its sources, there's a box there that says enterprise
(50:02):
search.
I'm like, no, wait a minute here.
That's not a solved problem by any stretch of the imagination.
Enterprise search is not very good still, 20-plus years later of us trying to solve that problem.
So, yes, there are things that we can do, and it's going to continue to get better.
I know that. But as of today,
(50:25):
let's take it back to the plant floor.
So, let's take it back to theplant floor.
This is why I'm not a bigproponent for leveraging
generative AI on the plant floor.
Yet, because of these majorlimitations and to your point,
chris, the one person who'sgoing to be using this so let's
say, you're trying to solve amaintenance problem on the plant
floor the person who's going tobe using this is the one person
who knows the least amountabout it.
Right?
Because that, obviously, ifthis was the expert, they would
(50:45):
just go fix the problem.
Right?
We're talking about and the AIvendors are selling this vision
of being able to get your leastexperienced maintenance person,
who's been there for three weeks, giving him or her a chat bot
that they can ask questionsabout how to fix this piece of
equipment, and it could justmake stuff up.
It'll just make stuff up.
It'll just make stuff up.
It'll say whether or not itknows how to fix that particular
(51:07):
piece of equipment.
It'll say, yeah, I know exactlyhow to fix that.
Here's what you're going to doYou're going to torque this bolt
and you're going to rev thisthing and you're going to add
this additive in and you couldblow up the plane.
You could kill somebody.
Speaker 1 (51:16):
Yeah. What I always share about using AI: it's like having an autopilot flying a plane, right?
So you trust that it's going to take you from one destination to another, and it will adjust accordingly and all that stuff.
But if that doesn't work, you, as a pilot, should know how to
(51:40):
fly it manually.
You should still know how to do that.
No different than a developer that uses AI to help develop software. You can use AI to get all the stuff, but you should still understand what it's doing.
That should be the initial approach of any AI use in your
(52:00):
business.
That should be a core.
So that's an important component.
Speaker 3 (52:06):
And that autonomous AI that I was talking about, that's like building an autopilot.
It's like building an autopilot for that part of the process, and it will look over your shoulder and make recommendations and say, you should do this, you should do this, you should make this change.
But, importantly, like a real autopilot, a real autopilot will kick off when it recognizes that it's outside of its operating parameters.
Autonomous AI, unlike generative AI, autonomous AI can
(52:30):
say, I wasn't trained to do this, you're back in control, I'm kicking off.
You need to take back control of the process. So it can hand operations back to the operator when it recognizes it's over its skis.
Generative AI doesn't. It just
Speaker 1 (52:47):
makes stuff up.
So, from your world then, Brian, when you're working in the manufacturing warehouse, you know, machine learning and IoT, things like that, what specific model or AI model are you using?
If you can share that. I don't know if you could, if it's a
(53:12):
proprietary thing.
Speaker 3 (53:13):
No, no, no, yeah, it's not proprietary at all. It's a frequent question I get, and it's a misunderstanding of how we're approaching these problems. We have no a priori models that we're bringing to the table. We are building these models for each customer, and there are a couple of reasons why we do that. A: every customer's data is different, every customer's equipment is different, their processes are all different. So it's really kind of impossible for us to build a library of models that are going to apply in all these different
(53:34):
situations, right? It's not even really possible anyway. So we're always starting from the data, building those data sets and then generating models off of them. As far as what the underlying algorithm is underneath the model, honestly, in this day and age, you don't really have to worry about that. Whether it's a decision tree under the covers, or a neural network, or a deep forest, I mean,
(53:55):
there's all kinds of crazy different algorithms under the covers. You don't even have to worry about that. The systems that train these models will pick the right one. They'll run tests against each other and find the one that's the most performant. All of that happens automatically under the covers. So now we've got a trained model that, again, is a black box:
(54:17):
inputs come in and predictions come out. Or, in the case of autonomous AI, perceptions come in, decisions come out.
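As a rough illustration of that automatic selection, here is a minimal sketch using scikit-learn. The dataset, file name, column names, and candidate list are all hypothetical, and real AutoML systems search far larger spaces than this.

```python
# Hypothetical sketch of automated model selection: train several candidate
# algorithms, test them against each other, and keep the most performant one.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

df = pd.read_csv("cleansed_sensor_history.csv")     # invented file name
X, y = df.drop(columns=["failure"]), df["failure"]  # invented column name

candidates = {
    "decision_tree": DecisionTreeClassifier(),
    "random_forest": RandomForestClassifier(),
    "neural_net": MLPClassifier(max_iter=1000),
}

# Cross-validate each candidate and pick the winner automatically.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(f"Most performant model: {best} ({scores[best]:.3f})")
```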
The other aspect of that, though, is that one of the most common questions I get is: are you going to use my data to make your models better and then hand it over to one of
(54:39):
my competitors? And the answer to that is no. We're starting from scratch. We're building these models for you, Mr. Customer. You get to keep that model when it's all done. You have all the IP. We don't ever leverage any of your data to train other models. Now, hilariously, they typically follow up and ask the question that you just asked, Chris: do you have a bunch
(54:59):
of models, then? And I'm like, no, I don't have a bunch of models. It's the same rules for you as for everyone else. You don't want me taking your model and giving it to your competitors; it's the same thing back to you. So no, we don't do anything like that. We always start from scratch and build these models for each customer. And that's not a huge lift; it's not as big of a lift as it sounds.
(55:19):
In fact, on these projects, the data science is typically 75 to 80% of the effort, and that's cleansing the data. It's the unsexy part of AI, right? You're cleansing the data, you're getting rid of bad data, you're eliminating rows with empty cells in them, and things
(55:40):
like that. Getting all of that data right, that's where the effort is. Actually training the model doesn't take that long at all.
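For a flavor of what that cleansing step looks like in practice, here is a minimal pandas sketch. The file and column names are invented for illustration, and real projects involve far more domain-specific rules than this.

```python
# Hypothetical sketch of the unsexy part: cleansing raw plant data.
import pandas as pd

raw = pd.read_csv("plant_sensor_export.csv")   # invented file name

clean = (
    raw.drop_duplicates()   # drop repeated rows from the export
       .dropna()            # eliminate rows with empty cells
)

# Drop physically impossible readings, e.g. a negative flow rate
# ("flow_rate" is an invented column name).
clean = clean[clean["flow_rate"] >= 0]

clean.to_csv("plant_sensor_clean.csv", index=False)
```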
Speaker 1 (55:47):
So is this kind of a big data world, then? If you're pulling all this data, it's big data. What database does it go into? And certainly I hope it's not SQL.
Speaker 3 (56:01):
Well, you know, I don't know how familiar you guys are with the world of data science. It's a pretty established world right now, so they have tools that they're using already. And typically, yes, you're working with hundreds of thousands, maybe even millions of rows, but you're typically working with it in Python types of environments, in what are called Jupyter
(56:24):
Notebooks. These are established tools. You're using pandas, you're using some of these established data science tools. And then, yeah, it is a big data type of problem, so there are tools like Microsoft Fabric, and there's Databricks, and there's Snowflake, tools behind the scenes that can help when you're working with that volume of
(56:44):
data.
The other thing that's really interesting, that most people do not talk about: I like Databricks and, by extension, Microsoft Fabric, because they leverage the Delta format, and I like their approach to versioning these datasets. Just like you version code. When we're building these datasets, we build them, we test them, we do some preliminary
(57:06):
modeling, we run some algorithms on them to determine, are they predictive? Are there gaps in the data? We run some heuristics and things like that, and then we'll go and do some more data science and modify that dataset. That history, that evolution of the set over time,
(57:26):
you want to version that, right? You don't want to get back to the old days, like we used to do with versioning source code, where you're putting the date and the time in the source file name and, as you're changing that file, losing track of which one was which. You want to version that data set, and tools like
(57:47):
Databricks and Fabric have a really nice method of versioning those data sets over time as they evolve. And that's what you want to do. Because if you find out that you introduced bad data, or that somehow, in your effort to cleanse this data, you screwed up the data, you need to be able to go back a couple of versions, say, oh, here's what we did wrong, and then be able to play forward from there.
(58:07):
So again, we haven't even gotten to AI yet, right? This is just all the data science work that it takes to get to a good ML model.
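As a small sketch of what that dataset versioning looks like with the Delta format, here is the kind of thing you can do in a Spark notebook with the delta-spark package. The path and version number are invented, and `spark` is assumed to be the session a notebook environment typically provides.

```python
# Hypothetical sketch: Delta-format datasets keep their own version history.
from delta.tables import DeltaTable

path = "/data/quality/training_set"   # invented path

# Every write creates a new version; inspect the history of the dataset.
DeltaTable.forPath(spark, path).history() \
    .select("version", "timestamp", "operation").show()

# If a cleansing step screwed up the data, go back a couple of versions...
good = spark.read.format("delta").option("versionAsOf", 3).load(path)

# ...and play forward from there.
good.write.format("delta").mode("overwrite").save(path)
```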
Speaker 2 (58:17):
See, this is what I was saying. People think of AI and they just think, oh, write me an email to say I'm sorry for missing your party on Friday, and it will generate a response for you. A nicer response.
Speaker 1 (58:33):
They have not gone to your stupid party, sometimes.
Speaker 2 (58:36):
Listen, I've been doing this for a while. I told you I've moved to using Grok. I think Grok is my friend now, and AI with Grammarly seems to have solved a lot of my problems.
Speaker 3 (58:49):
Well, and this is the blessing and the curse of generative AI for me, right? We started this industrial AI division in 2019. No one had even heard of LLMs in 2019, right? And so we were doing this AI stuff with customers, and we were talking about building big data sets and building models and whatever. Then, November of 2022, suddenly the world gets introduced to ChatGPT, right?
(59:11):
And now my phone's ringing off the hook because everyone wants to talk about AI, but it's the wrong AI. They want to talk about this generative AI, and I'm like, that's great, but let's talk about AI that we can actually implement today on the plant floor and have a big impact. So it's my blessing and my curse: it gets me in the door, but then I typically pivot pretty quickly to other types of
(59:32):
AI.
Speaker 2 (59:33):
Yeah, there's a lot to it. I don't want to take away from the topic that we're having, but I think of this from the
(59:53):
business point of view. When we deal with AI, and it's even myself sometimes, AI can do so much. Chris started talking about the agents, you talked about autonomous AI, you talked about predictive. We talked about all these different things, and sometimes, how does someone come up with where they can apply this? So what are some things that somebody could do to make their business a little more efficient? We talked about it in
(01:00:14):
industrial manufacturing. You have the tools to help with that; they can do predictive maintenance. The other example that you had was a really good one: you could predict drops in output due to external conditions and such. But how does that journey even begin? Because there's so much to this.
(01:00:37):
To be honest, it sounds overwhelming to me, and it's not this conversation, it's just all of this. Oh, you can use agents that can do all this, and then you have an agent that manages all the other agents, and it's like, what do I do?
Speaker 3 (01:00:52):
Yeah. Well, we'll bring it full circle back to the beginning of our conversation. We talked about use cases, so it's still the same answer: use cases lead, technology follows. It's still the same thing with AI. What we do is we typically sit down with the customer and talk about where they can leverage AI and what the use cases are. Now, it's kind of a chicken-and-egg thing, because they don't
(01:01:12):
always know what the state of the art is. So we prime the pump: we teach them just enough to be dangerous about AI. It's typically a couple-hour-long workshop that we do. And now the wheels are turning, and they're starting to say, oh, I get where that could apply. We're showing them use cases
(01:01:34):
where other customers have found success with AI, so now they're starting to say, okay, I see how that could apply here. And they start saying, what if we used AI here? What if we used AI there? And they're sitting down with experts in this, so we can very quickly say, yes, that would be a perfect use of AI, or, that's more like Skynet and we're not there.
(01:01:57):
So we can very quickly separate the wheat from the chaff, and we can get to, here's 5, 10, 15 really high-value projects that we could tackle right now, leveraging AI that exists today, that would have a huge impact. And then we work with them to figure out the ROI of
(01:02:18):
those projects, and of course we've got to get justification and all that. And then finally we can actually attack that project and make it a success. And then it's just rinse and repeat: you grab the next project off the list and you go through that process again.
So one of the things that I always tell customers, and this is
(01:02:38):
actually how I wrap up when I'm speaking at trade shows, is: take that first step, right? I know that not everyone feels like they're ready to take on an AI project, but whether you feel like you're ready or not, one of your competitors is probably taking that first step. So you need to take that first step. And even if you don't want to go full AI, then at least do some AI readiness, right?
(01:03:01):
How does your infrastructure look? How's your data infrastructure? How does your networking infrastructure look? What's your data story? Do you have all of these things in place so that when your organization is ready to take on AI, you've got all the building blocks ready? So even if you just want to take on AI readiness, that's fine, but take that first step.
The second thing I tell them, and I always say I know
(01:03:21):
it sounds self-serving, is to get an independent system integrator involved. Because it is complicated, there are so many ways to skin this cat, and there are so many different paths you can go down. And if you just go to the AI vendor, they have a hammer and everything's got to look like a nail, right? They've got one tool that they can use to solve every problem.
(01:03:42):
Whereas an independent SI like us has all the tools in the world, right? And sometimes we have these conversations and identify a use case, and it turns out maybe that's not even really an AI use case. Maybe we can solve it with just some visualization and some existing off-the-shelf analytics, without having to go full AI to solve that problem.
(01:04:03):
So we can go anywhere the conversation needs to go; we can go anywhere those use cases lead us. So yeah, have that initial conversation with someone who's an independent expert in this field, and have them help you build a roadmap on how to get to where you wanna go. The other thing is that once we've built that roadmap, now we can start to bring vendors in,
(01:04:23):
because ultimately it is gonna be running on some platform. So now we can start to bring vendors in and look at what the options are for tools that can solve that problem. And in our world we've got tools like, there's a company called Cognite, and they do IT/OT, building those data sets
(01:04:46):
combining IT and OT data together. We've got tools like Composable that does autonomous AI; that's all they do, autonomous AI in the industrial world. Now, these are not vendors that you're going to stumble on, but these are all vendors in my toolbox that I can pull from. We work with Rockwell, which is a big player in this space.
(01:05:06):
We work with folks like Ignition in this space. So we work with all these major vendors in this OT space, this operational technology space, the world we live in. We work with all these different vendors, and we can bring them to the table and build those solutions as necessary.
Speaker 2 (01:05:21):
That's good, because it does sound overwhelming, and the perception that many may have is that it sounds complicated. Some people think it sounds easy, that they can just do it. We see this even with the software implementations we deal with. Everybody always thinks, it's so easy, I can do it. But then you
(01:05:42):
don't know what you don't know, and sometimes you put yourself into a position where maybe you didn't make the right decision, because you don't have that experienced reasoning to understand the cause and effect of what you're doing. So, from the manufacturing point of view, what are some other efficiencies that you've seen
(01:06:05):
organizations gain by implementing AI in their organization?
Speaker 3 (01:06:11):
Yeah, and you know, it's as varied as the customers that we work with, so I'll give you a couple of examples. We worked with a life science manufacturer, and they had certain published sustainability goals about how they were trying to reduce their greenhouse gas emissions,
(01:06:33):
by X percentage by this target year. And that's one of those things where, when people talk about sustainability, I don't know what other technology you're going to grab, other than AI, to meet those goals. Everyone's got these really aggressive sustainability goals, and it's not like there's some technology waiting in the wings that's about to come in and revolutionize energy.
(01:06:55):
We know about all the technologies that exist here. So AI is the technology that we can leverage to hit those sustainability goals. So they had these published sustainability goals. And in life science, if you're not familiar, you're obviously trying to control the environmental systems very tightly. You're trying to control for humidity and pressure,
(01:07:18):
and temperature, of course. But those systems are oftentimes set-it-and-forget-it, because it's nobody's full-time job to sit there and turn the knobs to keep it within spec while also minimizing energy usage. So we trained an autonomous AI agent to actually do that. It does not set and forget; it
(01:07:38):
sits there and makes micro-adjustments 24 hours a day. The constraint is that it's got to stay in spec, but within that it's always working to minimize energy usage, and it can make intelligent decisions based on all kinds of signals: past demand, outside temperature, outside humidity
(01:08:00):
and pressure, what the cost of energy is at that moment. It can leverage all of those signals in the exact same way that a full-time human could if that was their job, and control that. They were able to get double-digit percentage decreases in energy usage, which is more than enough to get them
(01:08:22):
to the sustainability goals that they were trying to get to. That was really just one site; we're now working with the customer to expand that out to many, many more sites.
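As a toy sketch of that constraint-first behavior, here is one micro-adjustment step in Python. Every name, number, and stand-in model below is invented for illustration; in practice the predictors would be trained from the plant's own data, and the agent would weigh many more signals.

```python
# Hypothetical sketch: stay in spec first, then minimize energy.

def predicted_humidity(setpoint: float, outside_humidity: float) -> float:
    return outside_humidity - 0.8 * setpoint   # stand-in for a learned model

def predicted_energy(setpoint: float) -> float:
    return setpoint ** 2                       # stand-in for a learned model

def micro_adjust(setpoint: float, outside_humidity: float,
                 spec=(40.0, 60.0), step=0.5) -> float:
    """One small move, of the kind made around the clock."""
    candidates = [setpoint - step, setpoint, setpoint + step]
    lo, hi = spec
    in_spec = [s for s in candidates
               if lo <= predicted_humidity(s, outside_humidity) <= hi]
    if not in_spec:                  # the constraint always wins:
        return setpoint              # never trade spec for savings
    return min(in_spec, key=predicted_energy)  # cheapest in-spec move
```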
Now, same technology, but let's go from energy to a customer that makes glass bottles. In the glass bottle process, if you're not familiar with it, a gob of molten glass
(01:08:44):
falls into a mold, air is blown in, and that's how you make glass bottles. Well, that process is very finicky, so it takes an operator with a light touch; they've got to be really tuned in. The customer told us, we've got two expert operators who are really good at this, and everyone else is just okay at operating this
(01:09:05):
process. And it's a very drifty process, so once it starts drifting, you're making bad bottles for maybe the next 20 minutes, maybe half an hour, before you can finally get the knobs turned to bring it back to where you're making on-spec bottles again. We were able to train an autonomous AI agent to get back to making on-spec bottles in less than five minutes.
(01:09:27):
Consistently. It never took longer than five minutes, and typically it took about two minutes to get back to making on-spec bottles. And one of the challenges with this, and why autonomous AI works so well for it, is that there are a lot of compensating moves. If I increase what's called the orifice, the size that the molten glass goes through, or I
(01:09:47):
decrease the temperature on the orifice, or I increase the pressure on the plunger behind it, when I'm making these moves I've got to make a compensating move somewhere else, right? So there are just a lot of knobs to turn, and it's almost too much for a human to keep track of all of that. So what humans end up doing is a lot of test and check,
(01:10:07):
right? They'll make a change, then see if it makes an improvement, then make another change. The autonomous AI doesn't do that. It's got in mind where it's going to go, and it can turn all of those knobs at the same time to coalesce to making on-spec bottles again. So that's another example. When you look at getting back to making
(01:10:30):
on-spec bottles in less than five minutes versus 20 or 30 minutes, that's huge savings, that's huge throughput, that's millions of dollars.
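Here is a toy illustration, in Python with NumPy, of why coordinated moves beat test-and-check when the knobs interact. The numbers and the linear "process" are invented; a real glass line is far messier than this.

```python
# Hypothetical sketch: two interacting knobs, one on-spec target.
import numpy as np

target = np.array([100.0, 50.0])       # invented on-spec readings
coupling = np.array([[1.0, 0.6],       # each knob affects both readings,
                     [0.4, 1.0]])      # so every move needs a compensating move

def readings(knobs):
    return coupling @ knobs

def test_and_check(knobs, steps=20):
    """Human style: nudge one knob, wait, check, then nudge the next."""
    for i in range(steps):
        k = i % len(knobs)
        error = target - readings(knobs)
        knobs[k] += 0.3 * error[k]     # one move at a time
    return knobs

def coordinated(_knobs):
    """Agent style: move all the knobs at once toward the target."""
    return np.linalg.solve(coupling, target)

start = np.array([80.0, 30.0])
print("test-and-check:", readings(test_and_check(start.copy())))
print("coordinated:   ", readings(coordinated(start)))
```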
Speaker 2 (01:10:38):
You eliminate waste as well. It all comes down to, I think, AI helping you increase accuracy to eliminate waste. And that autonomous AI turning all those knobs sounds like a person to me. In some cases, I wonder if it could do it more reliably than a person.
Speaker 3 (01:10:58):
Well, I mean, it never calls off, it never takes a break, it never goes on vacation. But the reality is that, with most of these, we're not building this to replace a person. The problem is that the manufacturing world has had massive losses of expertise. As the baby boomers retire, they have lost decades and
(01:11:21):
decades of experience. There was a report from LNS Research; this is for the US. In 2019, the average tenure in a given position in the manufacturing workforce was 20 years. By 2023, it had dropped to three years. Three years' tenure. That's
(01:11:41):
insane. And when I talk to my clients, they're all seeing this. They're saying, I've got high turnover; it's a generational thing. Nobody wants to work these factory jobs. We can wish that it wasn't the case, and we can certainly hope that it changes, or we can just live with the fact that this is the reality my clients are facing. So they're not looking to replace people;
(01:12:03):
they're looking to get that person with two weeks of training, who's probably going to quit in six months, to at least where they can make good bottles.
Speaker 2 (01:12:11):
Yes, see, this goes back to something I've been saying: AI is not going to replace you; someone using AI will. That's right, because it's another tool that you're using. And if you look at the Industrial Revolution, the advances in civilization, we've always done things to make it
(01:12:34):
easier for people to do more, in a sense. So again, you go back to where someone may not want to do those specific positions, or there may not be enough in the talent pool for those positions. If you can have something reliable to help, then you can still continue to prosper and be successful.
Speaker 3 (01:12:53):
And that's how we stay competitive. And when I say we, I mean the US. I'm a US citizen; I was born and raised in Ohio. I love this country, and I love manufacturing. This is how we stay competitive, how we stay ahead: leveraging these technologies to look over the shoulder of our least experienced operators and get them to where they can run these lines in an expert fashion.
(01:13:15):
That's how we're going to do it. AI is not the only way to do it, but it is one mechanism that we're looking at to try to do that. And again, in the role that I'm in, director of industrial AI, obviously one of the most common questions I get is, are you putting people out of work? And again, the answer is no. My clients don't have enough people to do this.
(01:13:36):
But there is a certain aspect of embracing automation and realizing that the jobs aren't going away. Certain job titles may go away, but that's just gonna create new job titles in the future. And I've got a couple of examples of that. Lamplighter was a job. Someone's full-time job was to light the lamps in the town at the end of the day.
(01:13:57):
And of course, with Edison and the adoption of electricity, that went away. Elevator operator was a job, a real job, where someone went to work every day and operated an elevator. And no one's lamenting the loss of elevator operating. We're not walking down the street and seeing out-of-work elevator operators sitting homeless
(01:14:18):
on the streets. The jobs change. That's what's happening, and that's what's going to continue to happen. As my clients, these manufacturers, continue to find challenges in finding this talent, the jobs will change. They'll start to adopt more automation. And I
(01:14:41):
mean, look, I've been out in these plants my whole career. They're not the sexiest jobs. Some of these plants are very dirty. The work is very repetitive. These are not jobs that humans want to do anyway. So let's get those humans into jobs that are rewarding, jobs they're excited to come to every day, and let's let AI
(01:15:01):
do some of these jobs that are menial and that are dangerous. That's the other thing.
Speaker 1 (01:15:06):
There are a lot of these jobs that are very dangerous, that we don't want humans doing in the future anyway. Yeah, for sure. And I think a lot of the AI is filling those gaps where you're lacking the skill, so fill those in. You're right, you have to have a positive outlook on the tools that we're creating, and it's just like Brad
(01:15:30):
mentioned: it's part of human civilization improving our lives, and it doesn't mean it's going to eliminate a person's value. That value can shift to other areas that are much more productive, maybe more creative, with more strategy around them, and then not worry about that, like you said,
(01:15:50):
dangerous, tedious work, and have a system do it for you, or have a robot do it for you. Just like the horse, right? Now the horse is used for other things.
Speaker 3 (01:16:02):
Yeah, and that's been the story of industrial progress. That's why I keep coming back to this: AI is not new in that regard. It's just the next tool, the next evolution, the next step. But this has been the story as we continue to move from manual processes, where you've seen the old pictures from the Industrial Revolution of people lined up and
(01:16:24):
working with their hands and making stuff, to where we are now, where all of that is done with a machine, the loom.
Speaker 2 (01:16:30):
I can think of so many jobs. They had children in the mills; they used to run the looms, running the yarn quickly through the wires, and they used to get injured and hurt, oh my gosh. So there are some benefits, I think, with AI. I think a lot of it is the mystical, magical black box and
(01:16:53):
the magic that it does. When you're sitting there creating code, or you're doing some processes, it just seems to know what you're thinking. So I think some of the apprehension is the fear of it, and I think you've had that with any advancement that was made. You look back to the automobile, some of these larger
(01:17:14):
advancements, and even some other tools that were created. There's so much to this. I could talk about AI for hours and days, days and hours, hours and days. I don't even know anymore.
Speaker 1 (01:17:30):
Now you have more time, Brad, to talk about it, because AI will do the rest of your tedious work.
Speaker 2 (01:17:34):
I have so many things that I wish AI could do for me. I just need to figure out how to apply it to get it done. And the one thing I keep saying: I'm still looking for the ability to manage multiple calendars in one place easily, without having to pull everything into one calendar. Yeah, it's such a simple thing too.
Speaker 3 (01:17:58):
You talked about
enterprise search.
Speaker 2 (01:18:00):
I was thinking, the first thing that came to my mind was, do you know how difficult it is to find an email? Oh my God. And you go back to, we have all these tools and all these wonderful things, and it's, how do I find this email? I'm with you on that; it's something so simple. With my old email box, it is very difficult for me to find an email without having the
(01:18:23):
exact match of what I'm looking for.
Speaker 1 (01:18:32):
But you know, I've got to tell you, I think that's one of the use cases for how people can get into utilizing these tools, because I get that all the time. I'm coming up to a meeting and I don't remember the conversation, or maybe I don't have an idea of what the meeting was about. Maybe I got pulled in because I've got to make a decision.
(01:18:52):
So I ask Copilot, the Microsoft product: hey, I'm coming up to this meeting, I'm going to be talking to these people, and here's the topic we're going to talk about. Give me everything I need to know and all the communication about this. And it does a wonderful job. So I come in and I don't look like I was unorganized; Copilot organized it for me. And it saves you time, right? Even something as simple as that. Start with that.
Speaker 3 (01:19:12):
Oh, for sure. Yeah, and back to this idea of time: there's so much fear and uncertainty and doubt around AI, but one of the things that we're going to see is an increase in leisure time. We just take for granted that there's a 40-hour work week, but
(01:19:32):
that was not always the case, right? The reason we're able to have a 40-hour work week is the advancements in technology and automation that made it possible to be more productive with less actual time. We're going to continue to see that, and AI is going to accelerate it.
(01:19:53):
We're already starting to see some rumblings in Europe about moving to a 32-hour work week. I'm all for it. That leisure time is what makes life worthwhile, when you can spend that time with your friends and your family, and be creative, and pursue hobbies and things that you're passionate about,
(01:20:13):
and not just spend your entire life working. So I am all for that, I'm ready for that, and AI is going to be one of the tools that brings it.
Speaker 2 (01:20:20):
It will be helpful with that. Just to add to it a little bit more, and we've spoken about it before: we need to change how we value time. We need to value productivity and output, not time, because some are fearful that, with AI, if I can do something twice as fast, I'll have to do twice as much. We have to come up with a fair
(01:20:42):
way to measure productivity and output, to get back to what you were talking about, where maybe you don't have to work the 40, 50, 60, 70 hours a week, where 32 hours of solid time is enough. And also, I've read studies. I'm not a scientist or a doctor or any of those, but I do a lot of reading,
(01:21:03):
and I've also experienced it myself, where sometimes you put something down, forget about it for a little while, and when you come back you're more creative, more energized, and more productive. I worked at a place where they used to force you to go out for lunch. The reason why is that the owner of the place, and he would buy people lunch and stuff, wanted you to go outside, because he realized that individuals were more productive if they got up from the desk and didn't work through lunch.
(01:21:24):
It wasn't about forcing people to go out to eat. His concept was, we want you to take a break during the day: go out, walk around the building, do something so that you're not sitting at your desk all day, and you can be a little more productive. And he was right. A little fresh air did some wonders, because you come back after lunch and you don't have that afternoon need for a nap, I guess you could say.
(01:21:47):
Bryan, we appreciate you taking the time to speak with us today. As I always say, time truly is the currency of life: once you spend it, you can't get it back. So anybody who spends time with us, we greatly appreciate it. We enjoyed hearing your insights, and I'd love to talk with you again in the future and get a little deeper into some of these areas. But if someone would like to talk with you more about AI and
(01:22:08):
learn a little more about what you do to help manufacturing organizations gain some efficiency from AI, what's the best way to contact you?
Speaker 3 (01:22:13):
Yeah, the easiest way is to go on my LinkedIn; you'll find me there. Bryan, B-R-Y-A-N, DeBois, D-E-B-O-I-S. Just search for that on LinkedIn and reach out to me. Or the RoviSys website is a good way to get in contact with me as well. It's RoviSys, R-O-V as in Victor, I-S-Y-S, dot com, slash A-I. That'll take you right to my landing page on the website.
(01:22:35):
But yeah, happy to talk to anyone about this. Please reach out. And yeah, I appreciate the time.
Speaker 2 (01:22:39):
Great, thank you. Looking forward to talking with you soon. All right, sounds good. Thanks, guys. Thank you, Chris, for your time, for another episode in the Dynamics Corner chair, and thank you to our guest for participating.
Speaker 1 (01:22:52):
Thank you, Brad, for your time. It is a wonderful episode of Dynamics Corner Chair. I would also like to thank our guest for joining us, and thank you to all of our listeners tuning in as well. You can find Brad at dvlprlife.com, that is D-V-L-P-R-L-I-F-E dot com, and you can interact with him via
(01:23:16):
Twitter at D-V-L-P-R-L-I-F-E. You can also find me at matalino.io, that is M-A-T-A-L-I-N-O dot I-O, and my Twitter handle is matalino16. You can see those links down below in the show notes. Again, thank you, everyone.
(01:23:37):
Thank you and take care.