Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Meredith Gregory (00:00):
It is the tool that will automate the tasks that nobody wants to do. It's that task that someone's like, if I never had to do this again, I would be thrilled. And that is where artificial intelligence absolutely shines.
Caleb Ayers (00:23):
Welcome to another episode of Inside ILR. Thanks for being here today. Thanks for joining us. So, first, I'm gonna let Meredith introduce herself and tell us how this podcast helped her get here today. We'll start with that, and then we'll get on to what we're gonna talk about.
Meredith Gregory (00:37):
So before I came to ILR, I was in Northern Virginia, working as a government contractor, as a program manager leading some pretty advanced AI algorithm development work. That said, I have family connections to Southern Virginia, and I've always cared a lot about the region and generally pay attention to what's going on down in Southern Virginia.
(00:58):
And in social media scrolling, as you do, I came across the institute and started digging into it a little bit more. Everyone says, that's in Danville. You see the building, right? And you're like, that's in Danville. And so I dug into it a little bit more and found the podcast. And as I would drive back and forth from Northern Virginia to
(01:20):
my family's farm in Southern Virginia, I would listen. And every time I'd be more and more like, that's so cool, that's just fascinating. How does this work? And over time I had some ideas about just general connections and didn't know who to reach out to. So I reached out to Caleb, figuring you were well connected, and you connected me with Jason, and a couple months later, here we are having a conversation.
Caleb Ayers (01:42):
So that is our number one podcast success story. Daniel, do we have anything better? We don't have anything better than that, right? Okay. That's number one. So yeah, you are the director of digital strategy and program management; been here for, what, six, eight months at this point?
Meredith Gregory (01:57):
February.
Yeah.
Caleb Ayers (01:58):
Okay, yeah. So what I want to talk to you about today is artificial intelligence, specifically in the manufacturing world. I know that's a big part of your role: how do you incorporate those two things? Real big picture, can you define AI for us? That is a broad question that you can take in whatever direction you want.
Meredith Gregory (02:17):
That sounds good. Yeah, that's the million-dollar question; everyone tries to describe it. My best way to describe it is through examples, at the end of the day. So, artificial intelligence: I think about intelligence, and then you add the artificial piece to it. Machine learning, which typically gets said right alongside artificial intelligence, is the same idea. You talk about learning and how you learn, and then you
(02:37):
add the machine component to it. And so we're replicating how humans think, how humans process information and data and learn. We've just added a computer system behind that to help automate it, at the end of the day.
So if we talk about some different use cases: your large language models, think ChatGPT, are by far the most common use
(02:59):
case today, really coming on the scene in about 2023. The math that drives all of this artificial intelligence, your large language models plus all of the other applications, is 17th century math. The concept has been around for centuries at this point. What really launched it into modern-day expansion and
(03:23):
just the gold rush, as they call it, that we've seen, is compute: your graphics processing units, which are essentially the fuel. On your laptop you're running a CPU. Graphics processing units, or GPUs, are much, much more robust and can process information far quicker, handling far
(03:47):
more complex calculations. So those came on the scene about, I don't know, 15 years ago, something like that, and it's just completely accelerated from there.
So I got into the space eight years ago, which is far before 2023. So I always try to explain use cases that are not just large language models, because that's what everybody thinks about today. But there are so many more applications that you
(04:09):
interact with day-to-day already.
So, some examples: you've got your chatbots, your ChatGPT, your large language models. I don't think we have to explain what those are; most people have seen them and interacted with them at this point. Beyond that, you have things like recommenders. So your Netflix recommended-for-you, your Amazon suggested
(04:31):
purchases, Spotify playlists, things like that. Recommenders are all AI in the background. Until we got to ChatGPT and the big revolution of generative AI, no one was labeling features as AI. There's nothing on Netflix that says, here's your AI-generated suggested-for-you list. It's just a really convenient feature that is nice.
(04:53):
You're like, yep, that's great. I do want to watch that movie. So you have your recommenders. You also have a lot of computer vision work. Things like looking at your phone to open it, that's facial recognition. When you go to the airport and you give them your ID now, it's a computer algorithm saying, yes, I
(05:14):
think it's the same person, at the end of the day. So there's a large span of computer vision algorithms and use cases.
Then there are your classic classifiers. So in your Microsoft inbox, you have Focused and Other; that's classifying. Your spam detectors, all of that is
(05:35):
just a standard classifier. So there are a lot of different AI or machine learning examples that go far beyond just the large language models. But sometimes it's helpful to think about those use cases in defining what it is and what it can do.
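The focused/other and spam examples boil down to one pattern: text goes in, a label comes out. Here is a deliberately crude sketch of that idea; the word lists are invented for illustration, and a real inbox classifier learns word weights from training data rather than using hand-picked lists:

```python
# Hypothetical, bare-bones "spam vs. focused" classifier:
# score a message by how many known spammy vs. normal words it contains.
SPAM_WORDS = {"winner", "free", "prize", "urgent", "claim"}
NORMAL_WORDS = {"meeting", "invoice", "schedule", "report", "thanks"}

def classify_email(text):
    """Return 'spam' or 'focused' based on simple keyword counts."""
    words = set(text.lower().split())
    spam_score = len(words & SPAM_WORDS)
    normal_score = len(words & NORMAL_WORDS)
    return "spam" if spam_score > normal_score else "focused"

print(classify_email("URGENT winner claim your free prize"))
print(classify_email("thanks schedule the meeting and send the report"))
```

A trained classifier does the same input-to-label mapping, only the scoring function is learned from examples instead of written by hand.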
Caleb Ayers (05:51):
Two things. One, can you explain this 17th century math without completely breaking my brain? And two, what exactly is the difference between machine learning and AI?
Meredith Gregory (06:04):
Good questions. Okay, so 17th century math: it's something called neural networks, and it's easier with visuals, so we'll see how we do without them. A neural network essentially is a series of computations. You're gonna have an input that comes in. In our brains, if you think about your neural pathways, we receive some kind
(06:26):
of stimulus, and then your brain does what brain things do, and some answer pops out on the other side. You know what to say, you know what a word is, you can read, you can understand context, things like that. So a neural network is just the mathematical representation of what's happening. So think about, like, a table of
(06:47):
numbers. This is a gross simplification, but it works. Think about a table of numbers; it's all just random numbers. Your neural network is just a bunch of random numbers. When you're born, you don't know anything; it's just a bunch of randomness, and you start to learn over time.
So we're gonna provide some kind of stimulus, or some kind of data, and we know what the answer is.
(07:12):
You know, if we have a picture of a dog, we know it's a picture of a dog, and we can tell our model, here's a picture of a dog. It's gonna run through its random set of numbers, and those random numbers aren't gonna do anything useful, but it's gonna give you an answer at the end that says, I think this is a cat. Which it's not.
So you get some amount of loss, or error, in your model, and then
(07:33):
you're gonna feed that back through your model. It's called backpropagation, and it says, okay, you're not right; change something about those numbers and then feed it through again. And so you do that back and forth, your forward pass and then your backpropagation, millions of times, so that those numbers become the exact right set of numbers that says, when I
(07:55):
give you a picture of a dog, you tell me it's a dog.
So it's a little bit complex to think about how that works, but at the end of the day, you're essentially just optimizing a set of parameters so that it produces the answer that you're looking for, based on the data you provide it. Hopefully without breaking your brain.

Yeah, no, no, that makes sense. Thank you. Yeah.
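The forward pass and backpropagation loop described above can be sketched in plain Python. This is a toy, single-neuron version rather than a real multi-layer network, and the two-number "dog vs. cat" features are invented for illustration:

```python
import math
import random

random.seed(0)

# Made-up "picture" features (say, snout length and ear pointiness)
# with a label: 1 = dog, 0 = cat.
data = [([0.9, 0.2], 1), ([0.8, 0.1], 1), ([0.7, 0.3], 1),
        ([0.2, 0.9], 0), ([0.1, 0.8], 0), ([0.3, 0.7], 0)]

# The untrained "network" is just random numbers, as described.
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = random.uniform(-1, 1)

def forward(x):
    """Forward pass: weighted sum squashed into a 0-to-1 'dog probability'."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))

# Training: forward pass, measure the error, nudge the numbers, repeat.
lr = 1.0
for _ in range(2000):
    for x, label in data:
        pred = forward(x)        # forward pass
        err = pred - label       # error signal: how wrong were we?
        w[0] -= lr * err * x[0]  # backward step for one neuron:
        w[1] -= lr * err * x[1]  # adjust each number against its
        b -= lr * err            # contribution to the error

print(forward([0.85, 0.15]))  # dog-like input: close to 1
print(forward([0.15, 0.85]))  # cat-like input: close to 0
```

After enough passes, the once-random numbers have been optimized so dog-like inputs come out near 1; a real model just repeats that trick across millions of parameters.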
So, your second question, about AI and machine learning.
(08:16):
Think about a big circle with a little circle inside of it. AI is your biggest circle; machine learning is inside that circle. So there are things that are considered artificial intelligence that are not machine learning, but that's a very small subset at this point. That's why they're often said together, AI/ML, because
(08:38):
they are almost interchangeable at this point. Okay. Machine learning is really when you start feeding a system data and training it based on that learned set of data. Artificial intelligence can involve some other things, but realistically, when people talk about it today, the terms are almost interchangeable.
Caleb Ayers (08:58):
Mm-hmm. Okay. Thank you. Yeah, that explanation of the how: I mean, essentially you were just describing how AI is trained, with the forward pass, you know, giving it the parameters, telling it if it's right or wrong, and doing that, as you said, millions of times.
Correct.
I guess for your role, I know a lot of what you're doing is trying to help, whether it be internal, or companies in
(09:19):
Southern Virginia, or even outside of that, be able to effectively incorporate machine learning and artificial intelligence into their processes. What are some of the ways that you're seeing companies integrate AI and machine learning, and then what are some of the challenges that come with that?
Meredith Gregory (09:34):
Yeah, so the short answer is: slowly. We've gotten to a point where AI is common language. When I started doing this, I would tell someone I was working with AI, and they're like, what's that? Which today would sound silly. So now everyone has this sense of: AI's here, AI's gonna stay, we need to get on board, we need to be involved.
(09:54):
But, I don't know what that means at all. And if I do, if I have a sense of what it is, it's probably a large language model, and I want to be able to use ChatGPT in some kind of effective way. So there's that mindset.
Ways that people are integrating AI today, mostly: I feed ChatGPT a document and I
(10:17):
throw in an email and I tellthem to make it more
professional.
Um, so you get these verylittle use cases, or I want to
research something using ChatGPT.
Um, there is yet to be justwide adoption of really robust
AI solutions across the board.
Um, I had a conversation withan external partner last week,
and they were like, oh, it'sgood to know we're not actually
(10:38):
that far behind. But you're having this conversation; you're actually a leader in the space. Once you're there, you really want to dive into a series of use cases and understand what it is that AI and machine learning can bring to the table.
So internally, just some examples that we've already started to implement: we did implement a chatbot on the ATDM website.
(10:59):
So if you go to atdm.org now, there'll be a little chatbot, and that chatbot knows all things ATDM. It gets used widely to answer questions, mostly from prospective students; that's where we've found we get the most traction, the most questions coming in.
We've also implemented a tool that does resume
(11:25):
restructuring. So we have all of these students in ATDM; they're coming in, and they need a resume so they can go do their interviews and go meet their, hopefully, future employers. And traditionally, the team has taken those resumes and kind of hand-jammed through restructuring, putting certain ATDM branding on them, making them sound better, making them
(11:48):
aligned to a job description. And there are many tools out there today, and we've gone with one that will take in a resume, provide a bunch of suggestions, restructure it for you, add some branding, and it saves a ton of time. The other thing that tool will do is mock interviews, with essentially an AI agent that's asking questions.
(12:09):
But it'll grade the student on how well they did. Do they need to slow down? Do they need to speak more clearly? Is what they're saying aligned to the job description that they want? So there are some very specific use cases around ATDM. There are plenty of use cases within the CMA and within the AMCOE as well.
(12:31):
We are at the early stages of fleshing out what it is that we're gonna do. But even if you think about this fairly standard manufacturing use case: we have a bunch of nonconformance reports, your corrective actions, and those are normally some kind of text-based description of something that happened that we need to fix.
(12:52):
Something's wrong with a part. And you want to track them; that's great.
But imagine being able to visualize clusters of those reports, where all of these reports are similar. And the worst thing you can do is have multiple similar
(13:13):
nonconformances. If something happens and it's never happened before, you're like, okay, we've learned this lesson, it's not gonna happen again. Well, when it happens again, you haven't really learned the lesson, right?
So if you can start to just visualize those nonconformances, that would be a great natural language processing example, a type of machine learning, where you just
(13:34):
cluster based on the text. And text is something that traditionally is not easy to cluster. You can't just throw it in Excel and hit "cluster" and you're done. It's a couple more steps, but it's not that complicated at the end of the day. So that helps drive where you're gonna go, how you adjust your processes, etc., from a manufacturing perspective.
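What clustering text-based nonconformance reports could look like can be sketched in plain Python, using bag-of-words vectors and cosine similarity. The report text and the similarity threshold are invented for illustration; a real system would typically use TF-IDF vectors and a proper clustering algorithm:

```python
import math
from collections import Counter

# Hypothetical nonconformance report descriptions (made up for illustration).
reports = [
    "surface scratch on housing after machining",
    "deep scratch found on housing surface",
    "hole diameter out of tolerance on bracket",
    "bracket hole drilled out of tolerance",
    "paint blister on panel edge",
]

def vectorize(text):
    """Bag-of-words: count each word in the description."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

# Greedy clustering: each report joins the first cluster it is
# similar enough to, otherwise it starts a new one.
clusters = []  # list of (seed_vector, [report indices])
for i, text in enumerate(reports):
    v = vectorize(text)
    for seed, members in clusters:
        if cosine(v, seed) > 0.4:
            members.append(i)
            break
    else:
        clusters.append((v, [i]))

for seed, members in clusters:
    print([reports[i] for i in members])
```

Even this crude version groups the two scratch reports together and the two tolerance reports together, which is exactly the "multiple similar nonconformances" signal described above.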
(13:55):
So there are a lot of different use cases. I think the biggest thing that I tell people is: you want a very specific use case, you want a very specific problem, and then address that problem. When you go to Google and you search "AI solutions for manufacturing," you get this very hand-wavy, high-level, not
(14:15):
specific solution. And it's hard to see. You know, I'm a manufacturer today and I want to adopt AI; I mean, no, I need to. Then I get that answer, but there's a huge disconnect in, how do I get from where I am today to doing that? And there are typically a lot of much, much smaller use cases
(14:36):
along the path to get there that are very impactful. You just have to dive into them.
Caleb Ayers (15:11):
That last part you were talking about, how, you know, the Google answer or the general answer is gonna be very vague and nonspecific. I just think about, you know, Jason, who's over our manufacturing advancement division; I hear him talk all the time about how we're focused on technology we can help companies implement now, and in ways that are actually
(15:32):
gonna help them. And so, what you're saying: making sure that there are very specific use cases, maybe even specific problems that they're already having, and how this tool can help. Because that's what it is; AI is a tool. It's not some end-all, be-all, solve-all-your-problems.
Meredith Gregory (15:45):
Well, I'll make one correction. To me, AI is a toolbox, and then all of your different algorithms are your specific tools. So your large language models may be your hammer; it's easy to go to and you can do a lot with it. But remember that there's a whole suite of other tools in that toolbox, or even in the garage. So yes, it's absolutely your toolbox.
(16:09):
Yeah, and there are lots of tools underneath that. But yeah.
Caleb Ayers (16:11):
So that idea that there are specific problems this can help alleviate, help solve, that's kind of our whole approach. I know Jason and his team love to use the word optimization. You know, how do we help companies optimize their processes? This would be another thing that fits well within that. What do you see as some of the main hesitations or
(16:33):
even drawbacks for companies who are looking to implement this?
Meredith Gregory (16:38):
Yeah, there are a couple. The first one is a lack of just understanding what it is. Like I said earlier, people understand AI, or they know the term AI, and there's generally not a complete understanding of what that really means. And so they'll say, we know we need to do it, but I
(16:59):
have all of these questions and I don't know what to do now. So there are not many services that I see; there are a lot of very technical solutions, but there's a lack of, Jason calls it being a tour guide: taking someone from where they are and helping navigate to where they need to be, and what
(17:19):
these terms mean, and where the pitfalls are, and helping avoid those. So that's something that we certainly can do; we're doing it internally and we can do it externally. So: just not knowing all of the steps that are required, and having that lack of clarity around the problem.
Once you get past that, then you have a question about
(17:40):
data security. And most organizations, no matter the type of organization, don't want their company's information to just be floating about for anybody and everybody to see. We've got passwords on our computers for a reason, right? And so there's a fear that AI will expose all of this data,
(18:01):
you know, that your data's going back and helping retrain models, and things like that. And it can, at times. So I break them down, and this is all around large language models, into public and private. There are public services, your ChatGPT that you don't pay for, or your Gemini that you don't pay for, and you are paying for those in the data that you're providing.
(18:22):
So yes, in that case, your data is being exposed to these companies; they are using it to retrain their models. If you have just very innocuous questions and it's no big deal, it's not really company information, then by all means, it's a great tool. Then you have private services, where you're gonna log in and pay for a subscription.
(18:43):
Now you're paying for that service in a monetary exchange. And, it's important to read the fine print, but in most cases, your data will stay your data. And then that data security concern is really no longer there. And you can host these models on, you know, your GovCloud instances and things like that.
(19:03):
There are absolutely ways to make them secure. It's just a matter of making sure you're taking the steps to do that. So data security is often where people land next; they're like, okay, I got it, I understand it. Wait, but now, data security. And then they get through that, and they're like, okay, I get it, that makes sense, we can do this in a very secure way, that sounds great. Now what? And the next step is around data.
(19:25):
So, we talked a little bit about training models. And if you're gonna customize any model in any kind of way, you can't do it without data. A model only knows what it's been taught through the data that you provide it. So if organizations don't have data in any kind of consolidated way, they don't have any data, they don't have
(19:46):
the data that answers the questions that they want answered through these tools, that's where you get into some deeper conversations about, okay, really, what are the next steps? How do we get there? It's a data readiness problem; that's gonna be the next big step.
Caleb Ayers (20:01):
So on that topic of data and training: I know the old idea of, you know, garbage in, garbage out, and you see that with ChatGPT. If you ask it a very vague question, it's gonna give you a very vague, generic answer. Or, I should say, if you ask it to generate something for you, whether that be an email or whatever, it's only gonna be as good as the instructions you
(20:23):
give it and the data it's provided. How does that idea apply, not just for the large language models, but for the other types of solutions that you're talking about?
Meredith Gregory (20:35):
Yeah, so let's use a manufacturing example; let's do defect detection as our use case, and we'll walk through it that way. So if I've got parts that are coming across and we're doing some kind of inspection to make sure that there aren't any major defects, computer vision will allow you to go in and identify defects, right? So I can say, does this part have a defect?
(20:59):
Yes or no. Or, find the defects. Those are two different things: you've got image classifiers or object detectors as your two different types of models there.
To train those, you would provide a bunch of data of what a defect looks like, and a bunch of data about what a perfect part looks like. So if I train a model and I've got pretty standard defects that
(21:23):
I see all the time, and I have a bunch of good examples (hopefully it's a lot of good examples and only a couple of defect examples), and you feed that in, the model will understand what that type of defect looks like. But say all of a sudden a machine crashed, something crazy happened, and the defect looks completely different.
(21:44):
The model may think it looks like a defect, because it looks more like the defective examples I've given it, but it may not. It may say, I have no idea what to do, I think it's 50-50, we'll say it's fine. So you just have to think about the data you've provided it, to make sure that it is a complete set of, in this
(22:06):
case, what kinds of defects you may run across. But across the board, whatever kind of data you provide is the only thing that that model knows. So if you leave out a big segment of data that would be helpful to creating the correct answer, you just cannot get there.
Caleb Ayers (22:25):
As you were using that example, I was thinking about a lot of the large language models, what do they call it, hallucinating, where they'll make stuff up if they don't know the answer, and they sound very confident.

Yeah.

Does that happen with the things you're talking about, like anomaly detection? Do those issues arise in those areas as well, or is that just with the large language models?
Meredith Gregory (22:47):
Yeah, it's a good question. So large language models fall into this category of generative AI, and with generative AI, you are generating some kind of new content, whether it's text or an image or voice or something like that. At the end of the day, you're just predicting the next word, or the next pixel, what have you, but there's a generative concept involved.
(23:08):
So that's where this concept of hallucinations came from: it's generating something that's just not correct. When you get more on the predictive side, you can absolutely have wrong predictions. That happens, depending on how your model's trained, more than you want. So you can have incorrect answers, but because in most
(23:30):
cases there is an answer, and it's not generating some kind of new text, they're not considered hallucinations; they're just considered errors. So you have your false positives or your false negatives, and depending on the use case, one is worse than the other, but they're an error that you can measure, not just some weird new concept or something, yeah.
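Those measurable errors, false positives and false negatives, can be counted directly. A small sketch with hypothetical defect-detector predictions against their true labels:

```python
# Hypothetical predictions from a defect detector vs. the true labels.
truth = ["defect", "good", "good", "defect", "good", "defect", "good", "good"]
preds = ["defect", "good", "defect", "good", "good", "defect", "good", "good"]

# Tally the four outcomes of a yes/no classifier.
tp = sum(t == "defect" and p == "defect" for t, p in zip(truth, preds))
fp = sum(t == "good" and p == "defect" for t, p in zip(truth, preds))
fn = sum(t == "defect" and p == "good" for t, p in zip(truth, preds))
tn = sum(t == "good" and p == "good" for t, p in zip(truth, preds))

precision = tp / (tp + fp)  # when we flag a defect, how often is it real?
recall = tp / (tp + fn)     # how many real defects did we catch?

print(f"FP={fp} FN={fn} precision={precision:.2f} recall={recall:.2f}")
```

Which error matters more depends on the use case: a false negative here is a bad part shipped, while a false positive is a good part scrapped or re-inspected.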
Caleb Ayers (23:55):
That makes sense. If you were asked by a company to give your 20-second elevator pitch for why they should consider any type of AI incorporation into their processes, into their toolbox, what's your elevator pitch?
Meredith Gregory (24:13):
Yeah. I think, while it is a tool, there are plenty of other tools out there. You know, the other half of my title is program management; just good process and program management can go a long way as well. Documenting things can go a long way. So this is another tool and capability to help create efficiencies. When applied correctly, it can have massive impact. It is the tool that will automate the tasks that nobody
(24:38):
wants to do. It's that task that someone's like, if I never had to do this again, I would be thrilled. And that is where artificial intelligence absolutely shines.
Caleb Ayers (24:47):
Thanks for being here today. Thanks for the rundown on all things AI. Anything else you want to add?
Meredith Gregory (24:52):
No, thanks for having me.