Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Kim Swanson (00:03):
Welcome to AASHTO Resource Q&A. We're taking time to discuss construction materials, testing, and inspection with people in the know. From exploring testing problems and solutions to laboratory best practices and quality management, we're covering topics important to you.
Brian Johnson (00:18):
Welcome to AASHTO
Resource Q&A.
I'm Brian Johnson.
Kim Swanson (00:22):
And I'm Kim Swanson, and today we have Mike Copeland from the Idaho DOT with us. Welcome, Mike.
Mike Copeland (00:30):
Thanks for having
me.
This is going to be fun.
Brian Johnson (00:32):
The topic that we're talking about today is the use of AI as it pertains to the industry that we're in, which is construction materials testing. Mike has gotten involved with AI in his role, so I'm not going to define Mike by his position title, because that is a very
(00:55):
unusual thing for somebody who is in construction materials to also be involved with. So, Mike, can you tell us what your title is and what kind of work you do at the Idaho DOT?
Mike Copeland (01:07):
Yeah, so I'm the Quality Program Manager, Construction Materials Group, out of our headquarters office. I deal with anything quality assurance related on the construction materials side of things, and then I dabble with AI a little bit and just try
(01:27):
to apply it to anything quality assurance or asphalt pavements related.

Brian Johnson:
How did you start getting involved with AI in this capacity?

Mike Copeland:
So, kind of like everyone else, ChatGPT came out and sounded really cool, so I played around with it a little bit.
But before that I had gotten a little bit more involved with
(01:48):
data science type stuff, and a lot of trying to analyze our pavement data and our construction data, and trying to get the data out of PDFs and things like that, because, you know, we have a paper-based system, but we scan all of our documents, so we're really an electronic system. We don't have a LIMS system or anything, so all of our data is
(02:09):
kind of locked down in PDFs. Trying different techniques to get that data out, instead of just sitting there 10-keying in numbers, got me involved with using R or Python a little bit, trying to find various methods.
And then generative AI came out, and so I tried ChatGPT and found it really useful for a lot of different things, and
(02:30):
then kind of found that it was also really useful once the vision models came out, the ones that are able to look at photos or videos or things like that. It's able to pull information out of PDFs in structured ways. It's different than OCR; it's kind of like OCR, but it keeps the structure intact better than OCR does. It's structuring your test reports into something you can
(02:53):
do analysis on.
Kim Swanson (02:54):
So for those who may not be familiar with OCR, because I totally know what that is, what does that stand for?
Mike Copeland (03:00):
It stands for optical character recognition.
Kim Swanson (03:03):
Oh yeah, of course,
that's obviously what it's
there for.
Mike Copeland (03:05):
That makes sense.
Brian Johnson (03:06):
Makes sense. Typically, when people move from one technology to another, you know, you go from paper to PDF, and then to some sort of digital data management. I guess the normal thought process would be: okay, well,
(03:27):
I have this thing that can read the PDFs; I'm going to take it, have it do that, and have it transform them into data storage, a database. But now with AI, you don't really have to do that anymore, but maybe you do it anyway just for the sake of having that data.
(03:51):
But did you go through that, or do you just use AI to read what's in the PDFs and not worry about that transition?
Mike Copeland (04:00):
I just use AI to read the PDFs now. Sometimes, you know, if I want to analyze multiple years of test data, I'll cycle through the PDFs, so it does one at a time, kind of like a big batch of PDFs, and then put it into a data table or a CSV document or
(04:22):
something, and then feed that into AI. But if it's just, like, oh, a week's worth of production data, like for asphalt pavement, so you've got all your test reports, you've got your hot plant printouts, you know, your 15-minute recordation, you've got all your bills of lading, all your daily work reports, things like that, you can kind of just
(04:43):
drag and drop it all into AI and start asking questions, and it can look into things. I mean AI; when I say AI, I mean generative AI, large language models, ChatGPT, Gemini type things. They'll sometimes lie to you or make mistakes, do what's called
(05:04):
hallucinate, so you've got to fact-check things, but generally it does a pretty good job.
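The batch workflow described here, cycling reports through a model one at a time and collecting the results into a CSV, can be sketched roughly as follows. The field names and the stand-in model call are illustrative assumptions, not ITD's actual schema or tooling:

```python
import csv
import io
import json

# Hypothetical schema for one asphalt test report; these field names
# are illustrative, not ITD's actual source-document layout.
FIELDS = ["project", "lot", "gmm", "gmb", "air_voids"]

def extract_report(report_text, llm):
    """Ask a model to return one report as JSON. `llm` is a stand-in
    callable; a real setup would send the PDF page (or its image, for
    a vision model) along with a prompt listing the wanted fields."""
    prompt = ("Extract these fields as JSON, using null if absent: "
              + ", ".join(FIELDS) + "\n\n" + report_text)
    data = json.loads(llm(prompt))
    # Keep only expected fields so a hallucinated key can't widen the table.
    return {k: data.get(k) for k in FIELDS}

def batch_to_csv(report_texts, llm):
    """Cycle through reports one at a time and build one CSV string."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    for text in report_texts:
        writer.writerow(extract_report(text, llm))
    return buf.getvalue()

# Stubbed "model" so the sketch runs without an API key.
def fake_llm(prompt):
    return '{"project": "US-95", "lot": "3", "gmm": 2.466, "gmb": 2.321, "air_voids": 5.9}'

print(batch_to_csv(["<scanned page text>"], fake_llm))
```

Restricting the output to a fixed field list is one way to keep a hallucinated key from silently widening the table, which matters when the CSV is later fed back into analysis.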
Brian Johnson (05:11):
So, in a situation like what you just described, what would be a typical question you might ask?
Mike Copeland (05:16):
Oh, it depends. If I want to extract out all of our, let's keep talking asphalt pavement, test data: we have source document requirements where everything has to be hand recorded onto a source document as the original source of record, and I would ask it to extract all
(05:39):
that data and perform the volumetric calculations or something like that, and it can definitely do that. Sometimes you have to test out your prompts. Your prompts are the instructions that you're giving it. You have to test them out and try multiple times, sometimes. But especially in the last six months, because this keeps evolving, you can give it some pretty basic
(06:02):
instructions, and pretty much any large language model will get it right on the first go for that kind of stuff.
Brian Johnson (06:09):
So you're largely using this for extraction of data and information from test reports. How else are you using it in testing?
Mike Copeland (06:18):
So I'm also like
stress testing, or what I like
to call like red teaming um ourquality assurance specifications
, looking for weaknesses, um,our quality assurance
specifications, looking forweaknesses, um doing the same
thing with like test methods or,like you know, using it to
clarify things, exploring ideaslike one off, proofs of proofs,
of concepts, things like thattoo, all kinds of different
(06:38):
stuff.
I've been testing it out alittle bit with like old dispute
resolution claims and feedingit all in all the data to see
like, okay, how would, how wouldai respond.
A lot of times it's prettyclose to what like a grb board
does, so it's pretty interesting.
(08:11):
I think and I've talked to a lotof other DOTs about AI here in
the last six months it soundslike it's a full range, like
some people have never tried it.
Some people have, you know,maybe tried it once when it was
first released and thought, eh,this isn't anything special.
And then there's a few that Ithink are probably using it all
(08:31):
the time and maybe not talkingabout it.
But I think us as an industryneed to talk about it more
because there's so muchpotential here with, like, how
we can apply it to our everydaywork.
I mean, it saves so much time.
I don't know that there's awhole lot of adoption.
Kim Swanson (08:48):
Where do you think it would be easiest for other DOTs and other people in your position to start? What's the gateway to adopting this type of technology in testing and quality and things like that?
Mike Copeland (09:04):
Yeah, it's funny that you ask that. I was actually just thinking about this yesterday, because I've been trying to get some of my co-workers using it more to help them. I got to thinking about it; I mean, we've all worked in this industry for a while. We're probably all a little familiar with contract
(09:25):
administration, especially those of us at DOTs. I was thinking about it yesterday, the way that you'd use an LLM. Well, first off, you've just got to go use it and mess around with it. But you interact with it like you would interact with a contractor, that whole trust-but-verify thing. And then just think of your prompting as the specification.
(09:49):
So you're writing the rules, and then you're guiding the output. So you have to know the subject. At least, that's what I find: I kind of have to know the subject to really successfully use AI; otherwise it's going to hallucinate and make things up, and you aren't going to know. But if you know the subject, it can really speed things up for you.
Brian Johnson (10:09):
Yeah. You know, one of the things that people have been kicking around lately is using AI as a, I don't know if the word replacement is right, but a tool for replacing the physical testing of materials or specimens.
(10:32):
What would you say about that, Mike?
Mike Copeland (10:36):
So the other day, just in an afternoon, I was messing around with AI and some gyratory data, and I was trying to see if there's a way to take some of the subjectivity out of Gmb testing, the SSD part of Gmb testing for asphalt pucks. I had AI help me write a Python
(11:00):
script that did a multilinear regression, where I kind of selected the variables that I thought might have an impact from all the other testing, so like T 308, T 166, Gmm testing, gradations, things like that, to see if I could come close to predicting the bulk specific gravity without doing
(11:22):
bulk specific gravity testing. I used AI to write a script to scrape all of our central lab data for the last five years, and then parsed out the same thing with AI; used it to pull out all of our gyratory data. So I had this big data set, all from one laboratory, multiple gyratories though, and was able to build a model that could predict within the d2s
(11:46):
precision. So that would have been within-lab, but then I started comparing it to other labs, and you could see the different gyratories. You know, these aren't companion samples or anything like that, but you could see how the different gyratories flex differently, or whatever, how they compact differently. You'd see the differences between the models. But it was all within, I don't remember the number off the top
(12:08):
of my head, but around 99% confidence that the measured Gmb would be within 0.007, if I remember right, of the predicted Gmb, without doing Gmb testing. So it seems pretty promising. So we started kind of using that as a red-flag diagnostic tool. But I want to look at it a little more and start comparing
(12:32):
more data, and maybe even consider: do we need to do bulk testing?
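A toy version of the regression Mike describes, fitting measured Gmb against other routine results and then checking how often the measurement lands within a tolerance of the prediction, might look like this. The data is synthetic, and the predictor list and coefficients are invented for illustration, not Idaho's actual model:

```python
import numpy as np

# Synthetic stand-ins for routine results (Gmm from T 209, binder
# content from T 308, final specimen height); the values and the
# "true" relationship below are fabricated for demonstration only.
rng = np.random.default_rng(0)
n = 200
gmm = rng.normal(2.46, 0.010, n)      # theoretical max specific gravity
pb = rng.normal(5.2, 0.20, n)         # binder content, percent
height = rng.normal(115.0, 1.0, n)    # gyratory specimen height, mm
gmb = (0.94 * gmm - 0.004 * pb - 0.0005 * (height - 115.0)
       + rng.normal(0.0, 0.003, n))   # "measured" bulk specific gravity

# Ordinary least squares fit: solve X @ beta ~= gmb.
X = np.column_stack([np.ones(n), gmm, pb, height])
beta, *_ = np.linalg.lstsq(X, gmb, rcond=None)
pred = X @ beta

# How often does the measured Gmb fall within +/-0.007 of the prediction?
within = float(np.mean(np.abs(gmb - pred) <= 0.007))
print(f"fraction within 0.007: {within:.3f}")
```

On real data, the honest version of this check holds out data the model never saw when fitting, so the "fraction within tolerance" isn't flattered by the fit itself.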
Brian Johnson (12:38):
Do we need to do bulk testing? This is what I want to get to, because as a DOT materials testing lab, you know, you're doing QA on projects, but you've just uncovered how easy it could be for somebody, I guess on QC or QA, to just make
(12:59):
up numbers, right? Absolutely. They could just make up numbers that are plausible, which creates some risk, because it's not actually tested and reflective of the material that the DOT is paying for. So with that knowledge, I mean, it's good; you've done the digging, you know how it works now, and so you're in a good position to be able
(13:23):
to explain the concerns and the risks there might be of somebody doing that. So with that, what do you do with this information you have now?
Mike Copeland (13:32):
I've been exploring risks a lot, and I'm of the opinion right now that our whole quality assurance system is out of date. It's got weaknesses. It's all built around a paper system that we've adapted as we moved into the digital age.
(13:52):
But as we moved into the digital age, now we have CSV files, or we have gyratory data files with an ASTM format, or we've got standards, and we've even got some encryption. But playing around with AI, I've found that pretty much any
(14:13):
of those you can sidestep, you know, any security features, and pretty much game any quality assurance practice that deals with data. AI is not inherently good, it's not inherently bad, but if you give it technical instructions, it will give you the output. If you prompt AI right, you can modify test results without changing any of the
(14:36):
metadata, things like that. It's kind of scary. So I think we need to rethink our whole quality assurance practice now that we're dealing with AI.
Brian Johnson (14:47):
Yeah, I kind of wonder about that with the proficiency samples as well, because, I mean, if somebody just imported all of the rounds that we have available and said, you know, what answers would give me a satisfactory rating for all of these? I am sure that there are some numbers that would probably work. So it's going to be harder for us to tell if people are doing
(15:09):
that. But the one thing that I would caution people about: one of the reasons why we have the proficiency samples is that they eliminate the need for more on-site assessments, because it's like a check in between. So if you want to have us go to annual on-site assessments, then go ahead and cheat, because that's probably where it's going
(15:33):
to go: more on-site assessments, because then we can't rely upon the results from these checks done through the proficiency samples. So things are going to change as people are using, or misusing, these tools. Other systems are going to have to adapt to account for that, and if we're not getting the quality that we're looking for,
(15:56):
things are going to change. So, when you've been thinking about all this risk, Mike, what are some things you're thinking Idaho DOT might have to do to account for some of this?
Mike Copeland (16:08):
As we move into this world of AI, I've been trying to come up with some of those kinds of answers too. In the past with quality assurance, at least in Idaho, and I think most other DOTs, we always focused on, you know, chain of custody. Chain of custody is a big thing, material sample security, things like that, and I think we need to consider data
(16:30):
security too. Do we have chain of custody on this data, from the source to right now? And if we don't, then we shouldn't be trusting the data. You know, stealing information is a really big risk in cybersecurity, but data poisoning is another big risk
(16:52):
that was recently, I guess not really that recently, identified as a big and growing risk. And I think that holds true for quality assurance in our industry as well: did the data change? Were those results adjusted or anything like that? And there are ways to identify it.
(17:14):
I mean, you start looking for patterns, or you increase your independent verification. There are definitely ways we can use AI to help prevent potential fraud with AI, but it's changing so rapidly. Every week there are new AI models released or new updates happening.
(17:36):
I think it's a continuous improvement process. We've got to be on our toes and be kind of agile in the way that we're approaching this.
Brian Johnson (17:46):
It's a great thing, like you're talking about rebuilding your quality program. I mean, how nice is it? When I think about our accreditation program: if you're a new lab coming in, you could use AI to help you write your policies and procedures, and there's really nothing wrong with that. You can eliminate some simple errors and get something that's,
(18:08):
you know, 80% there, and then all you have to do is go in and make it yours, kind of customize it, and there's nothing wrong with that, right? But if you don't do that extra 20%, then you're probably going to have some problems, because things aren't going to make any sense. But it can get you there. And, of course, I could see a situation where we eventually use it to help with audits too.
(18:33):
Right, and that could save a lot of time too, and be more efficient, and perhaps improve the standardization, you know, eliminate some of the subjectivity with audits. So there are a lot of good things that can come out of this on all fronts, right? We just have to figure out how we can use it. Were there any other ways you were thinking of
(18:56):
using AI as a tool at the DOT?
Mike Copeland (19:02):
At the DOT, I built a tool and then wrote a report on it, and the whole time it took me was like five hours; this was a couple of weeks ago, just an afternoon. I kind of timed myself just to see how long it was going to take. But I made a tool; so we have our asphalt testing source document sheets with all the handwritten data on them, and I
(19:23):
made a drag-and-drop tool, because there's a lot of testing information on there that goes into an Excel spreadsheet. You snap an image of the handwritten dirty sheet, upload it into this app, and then the app fills out the Excel file and saves it for you
(19:46):
automatically. So you don't have to do data entry. And then you go through and you make sure all the numbers transferred right, but 95% of the time it's right, or it's really obvious when it's not right. AI doesn't transpose numbers. It might drop a decimal, or it might just skip over that field,
(20:06):
but it's not going to transpose a number. So the errors are easier to catch than the typical data entry errors you normally see, and it saves so much time. I mean, it takes 10 seconds to fill out a form from a source document instead of, whatever, 15 to 30 minutes.
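The error pattern Mike describes, a dropped decimal or a skipped field rather than a transposed digit, lends itself to simple automated checks on each transcribed row. The field names, limits, and the air-voids tolerance below are illustrative assumptions, not ITD's rules; the cross-check uses the standard air-voids relation 100 × (1 − Gmb / Gmm):

```python
# Plausibility checks for values transcribed from a handwritten sheet.
# Field names, limits, and the 0.5 tolerance are illustrative only.
LIMITS = {
    "gmm": (2.3, 2.7),         # theoretical max specific gravity
    "gmb": (2.2, 2.6),         # bulk specific gravity
    "air_voids": (0.0, 12.0),  # percent
}

def validate(row):
    """Return human-readable flags for one transcribed row."""
    flags = []
    for field, (lo, hi) in LIMITS.items():
        value = row.get(field)
        if value is None:
            flags.append(f"{field}: missing (model may have skipped the field)")
        elif not lo <= value <= hi:
            # A dropped decimal (23.21 instead of 2.321) lands far out of range.
            flags.append(f"{field}: {value} outside [{lo}, {hi}]")
    # Cross-check: air voids should agree with 100 * (1 - Gmb / Gmm).
    if all(row.get(k) is not None for k in LIMITS):
        calc = 100.0 * (1.0 - row["gmb"] / row["gmm"])
        if abs(calc - row["air_voids"]) > 0.5:
            flags.append(f"air_voids: reported {row['air_voids']}, computed {calc:.1f}")
    return flags

print(validate({"gmm": 2.466, "gmb": 23.21, "air_voids": 5.9}))
```

Range checks catch the dropped decimal, and recomputing a derived quantity from its inputs catches a value that is individually plausible but inconsistent with the rest of the row.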
Brian Johnson (20:29):
Yeah, that's great, because then you've invested the five hours, and now you're saving time forever after that.
Mike Copeland (20:37):
Right, right, and it's scalable. I mean, I gave this app to a couple of people within our group to use, like our lab folks, and I haven't gotten any feedback yet, but it should save them a ton of time. And now, if we scale this out to every tester in the department, we're looking at hundreds, maybe even thousands
(20:58):
of hours saved annually, where they're able to do something other than 10-key in numbers.
Brian Johnson (21:06):
Now, speaking of time savings, I want to ask you about the AI chatbot. That seems like a really good tool for getting answers to people quickly and, hopefully, accurately. Can you tell us about how you developed it and what you intend it to be used for?
Mike Copeland (21:27):
I built that maybe a year, year and a half ago, using what would now be considered an older technology in AI, but at the time it was pretty new. Basically, what you're doing is called RAG, R-A-G, which stands for retrieval-augmented generation. So it's taking all your data; I put in all of our specifications, all of our manuals, different memos,
(21:53):
all of the research that we've done over the years, and put it into this database thing. And so then, when the user asks a question, it goes and searches that database, pulls anything relevant, then takes all that relevant stuff plus the question and sends it to the large language model.
(22:13):
And so now it's responding with all this context. And nowadays it's super simple to set those up. I mean, you can take your manuals and set this up and run it locally on pretty much any computer; 20, 30 minutes, maybe, to get something set up on your own computer.
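The retrieve-then-prompt loop described here can be sketched in a few lines. The retrieval below is plain word overlap and the "model" is a stub so the sketch runs offline; a real setup like Mike's would use embeddings, a vector database, and an actual LLM call, and the spec snippets are invented, not Idaho's specifications:

```python
import re

# Minimal retrieval-augmented generation loop, with word-overlap
# "retrieval" and a stub model so it runs with no dependencies.

def tokens(text):
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, corpus, k=2):
    """Return the k documents sharing the most words with the query."""
    return sorted(corpus, key=lambda d: len(tokens(query) & tokens(d)),
                  reverse=True)[:k]

def answer(query, corpus, llm):
    """Assemble the retrieved context plus the question into one prompt."""
    context = "\n---\n".join(retrieve(query, corpus))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return llm(prompt)  # a real chatbot calls ChatGPT/Gemini/a local model here

corpus = [
    "Section 405: asphalt binder shall meet PG 64-28.",
    "Section 703: coarse aggregate LA abrasion loss shall not exceed 40 percent.",
    "QA manual: verification samples are taken by the Department.",
]
echo = lambda prompt: prompt  # stub model that just returns its prompt
print(retrieve("what binder grade for asphalt?", corpus, k=1)[0])
```

The "answer using only this context" framing is what lets the responses cite the source documents, which is also how the conflicting-manuals benefit Mike mentions later shows up.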
Brian Johnson (22:35):
And who's using it?
Mike Copeland (22:36):
We have it available to ITD employees. It's meant for, like, inspectors, testers, you know, the resident engineer, anyone else that has a question about our specifications. Internally, at least right now.
Brian Johnson (22:50):
Yeah, so this is
a closed system.
Mike Copeland (22:53):
Currently, yeah.

Brian Johnson:
Okay, all right. So you're saying "currently." What's your plan?

Mike Copeland:
I have no idea. I could see where it could be beneficial for all kinds of industry, you know, contractors and our consultants and stuff too.
Brian Johnson (23:19):
Having, like, reasonable parameters to look at for data, or being able to save time on data entry; I mean, there are a lot of good, useful things there. And being able to ask about your standards, like your state methods on something: hey, what's that again? What does it say? How long do I do this for? And boom, you've got an answer.
Mike Copeland (23:32):
I think that's really handy. One of the really cool things about that chatbot: we have a lot of different manuals, and as we were testing out the chatbot and using it, asking questions, we'd notice, well, that's not right. But then it would cite something in a manual that conflicted with another manual. It made it really clear: okay,
(23:53):
here's where we have conflicts in our current published documents. That's kind of been an unintended benefit of the whole thing.
Brian Johnson (24:04):
That would be tremendous for AASHTO standards to use, because we do have, I mean, you're talking about bulk specific gravity and Rice and all of these, and there are all these pieces of equipment, let's say an oven or a balance,
(24:25):
that are used in multiple standards. Wouldn't it be nice to ask: is it the same balance, or what can I use this balance for, which test methods? And all of a sudden, you can kind of figure things out a lot faster than if you were poring through all of these documents on your own. Okay, so we're talking about standards now. So I'm going to ask you another one that is tricky: how do you deal with concerns about copyright, intellectual property,
(24:47):
and personal information when you're using an AI chatbot or any of these AI tools?
Mike Copeland (24:55):
I've tried using models downloaded locally, like onto my computer. They're pretty good. Obviously, at that point all my data is localized to my computer, so there are no PII risks or anything like that. With ChatGPT or Gemini or any of the others that are out there, I really pay attention to the terms of use and how they're
(25:18):
going to use my data, or if they're going to use my data. And generally, the models that I'm using, which are a lot of the different models, have opt-out options, or they won't use your data on a certain platform.
Mike Copeland (25:32):
I make sure that I don't use a model, or a site that's hosting a model, that's going to use my data for training.

Kim Swanson:
One of the things that came to my head when, Mike, you were saying the chatbot had the unintended benefit of identifying some discrepancies between your materials, like the manuals and standards and practices, is
(25:55):
that I think it would be very interesting to compare, like, the AASHTO or ASTM method versus the states that have their own methods for something, and to see what's really different and what isn't, because I feel like a lot of times states are using their own methods when they're really not that different, or not different at all, from AASHTO or ASTM.
Brian Johnson (26:17):
You know, we've got all these different standards development organizations, including all the DOTs, and if you could dump all those state methods in and say, write a standard that incorporates all these requirements, or give me a document that incorporates all these requirements and maybe highlights the differences, and
(26:37):
then you ask, let's say it's Idaho: Idaho, do you really care that much about, you know, and I'll pick some arbitrary thing that we were talking about today in one of my team meetings, which was the length of a spoon, which is an insane thing to specify
(27:02):
to a really precise dimension, like it has to be this many inches long; is it okay if it's this? If there was something like that, it's like: okay, how married are you to that length of the spoon? Are you okay getting rid of this and just getting along with everybody else and saying we don't need to do this? Well, you could probably identify those things a lot more quickly. Mike, do you happen to have the chatbot available on your
(27:26):
computer right now, so we could ask some questions? Ooh, do we get to see it?
Kim Swanson (27:32):
Yes! For those watching on YouTube, wow, you are going to have an experience. We're going to actually see the answers, so you don't have to just listen to it. So, shameless plug, head over to YouTube so you can see it.
Brian Johnson (27:45):
Does this chatbot have access to your, like, project data?
Mike Copeland (27:50):
No, not project data. Oh, probably not.

Brian Johnson (27:53):
Okay, I can't ask this question then. I was going to ask it about something on pavements in Idaho. Does it know anything about, like, if you were to ask it how many miles of asphalt pavement need to be repaved in 2028 in Idaho, would it be able to answer that?

Mike Copeland (28:15):
I don't know, let's find out. Hmm, it couldn't answer that either, but can I show you a tool that can?
Brian Johnson (28:19):
Yeah. How many miles of asphalt pavement need to be repaved in 2028 in Idaho?
Mike Copeland (28:32):
I'm adding, "Use your search tool to find out." Okay. It seems like it helps it to remember that it's able to use tools when I do that. So now it's going to search the web; this is probably available on our website. A lot of times I look through the thinking as I'm prompting, because I use these interactively, kind of back and
(28:53):
forth with the AI. I find that if I look at the thinking, or the reasoning, of the model, it'll help me find holes in my prompt, and then I can go back and edit my prompt. Even after it's done running, it'll let me go back and edit my prompt and, like, rerun it, so I can fix my prompt and try different things and plug the holes so that it gives me
(29:17):
exactly what I'm looking for.
Brian Johnson (29:19):
Okay.
Kim Swanson (29:21):
I've also heard that if you ask, like, ChatGPT or something like that to act as a prompt designer: how would you ask this, or how would you ask that? That can help you narrow down your prompts. If you just ask it to
(29:41):
act as a prompt designer, it will help you formulate your prompts better.
Mike Copeland (29:47):
That works really
well.
Brian Johnson (29:48):
Wow.
So here we go, we got our answer: 151.4 miles, approximately, with the individual projects that need to happen and what needs to be done to them. Lots of seal coats going on in Idaho. It looks like it's citing its sources.
Mike Copeland (30:05):
Let's check out what it's citing. Yeah, FY2025 to FY2031. That's very cool; so it went out and found that. That's pretty cool.
Brian Johnson (30:15):
Yeah, that's a
good one.
I can't believe it was able tobe that exact.
Mike Copeland (30:20):
There are a few tools and things that I think would be useful to share with people. So, like the other day, we were looking at some split sample comparison testing, trying to figure out the difference between labs, and I wanted to look at the gyratory data a little bit closer. So, instead of trying to plot out gyratory data by hand, I made a
(30:41):
drag-and-drop tool: you select a gyratory file, and it plots it out, angle, pressure, moment. And I have another version somewhere where you can do multiple gyratory files, so I
(31:06):
was able to just create these cool visuals that I then screenshot and put into my write-up on, okay, this is why there's a difference in the test results: because the gyratories are compacting differently. It's just a drag-and-drop tool that I needed once, but now it's pretty handy. You can do this all day long, and it took like 10 minutes to build.
Kim Swanson (31:27):
That's really cool. One of the concerns that I have, just as a member of the public, is what you were talking about earlier: the possibility that people might not actually perform the testing and just use AI to give them answers, like, yeah, this is probably what it will be. Even if it's really accurate, that just kind of frightens me; there's not someone actually testing it.
(31:49):
But if this is just taking the data from the test, the results from the test, and giving you a different way to look at it and interpret it, that I love. But when you were talking about, you know, it guessing what the result is, even really accurately, I'm like, oh, that seems really not great. But again, I don't really know anything; that's just me being scared of, you know, a bridge
(32:11):
falling or something, that kind of stuff.
Brian Johnson (32:14):
Can we go back to
your in-house one for a minute?
So I guess these models they'reable to pull information that
you've given them, but theyaren't necessarily storing their
own information.
Mike Copeland (32:28):
Correct, correct. Yeah, this tool doesn't go out and search the internet or anything like that. It's answering just based off the documents I gave it, which are, like, our standard specifications and our quality assurance manual and our contract administration manual, things like that. So we could ask it about, like, a past research project and what
(32:49):
the findings were. We could ask it, you know, something about aggregate requirements, and it will know those things, but it won't know anything that another user is asking.
Brian Johnson (33:00):
Oh, okay. Most of my questions are dumb questions that I thought would be funny to ask, so I don't really have anything interesting left to ask about the AI chatbot. Now that you've started using all these tools, well, I don't know how many people know about what you've been doing, but are you getting questions from other DOTs, from
(33:26):
other departments within the state government of Idaho, like, hey, can you help us with this? Can you tell us what you did? Are you being inundated with a lot of these kinds of questions now?
Mike Copeland (33:36):
Not a ton. I've definitely talked to other DOTs around the country and some university research groups about how I'm using AI and how to get into this; just use it. But yeah, there have definitely been a lot of conversations with different groups, which has been really fun, because we share
(33:59):
what I'm doing, and it's really interesting to hear what they're doing, because, I mean, I'm no expert. I'm treading water and drinking from the fire hose, and this is changing every single day. It's always good to hear: okay, how are you using it, what are you using it for, and what's been successful for you? Like the other day, someone was telling me that, instead of writing down test results, they
(34:21):
were using it for dictation: like, okay, here's my Rice bowl, and saying it out loud, so they're not having to walk over to the pen and paper every couple of seconds. They're like, it saves me hours every day.
(34:42):
I'm like, oh, that's cool, I hadn't even thought about that. Yeah, it's good to have conversations with people and see how they're using it, and share what you're doing or what they're doing. It's all so new.
Brian Johnson (34:52):
Absolutely. And we are going to be seeing you soon at the AASHTO Committee on Materials and Pavements meeting in Hartford, Connecticut, and I think we're going to be talking more about this. Hopefully we can find out if there are any other people like you, in your position at the other states, who are also messing around with this, and see if we can start to get some best practices together, and maybe even talk about it at the next AASHTO Resource Technical Exchange.
(35:13):
I believe that you are going to be having a conversation with Bob Lutz of our office about potentially doing that. Hopefully we can get something going, and I think that would be a really interesting topic for everybody there. So, for those of you out there listening who also attend the Technical Exchange, that might be a good session in
(35:36):
Kentucky in 2026. So stay tuned for that. And Kim, any last questions?
Kim Swanson (35:42):
No last questions, but I'm going to start the plug early for the 2026 AASHTO Resource Technical Exchange, which will be March 9th through 12th in Louisville, Kentucky. And we're having a virtual Technical Exchange November 5th and 6th. They're both half-day events, and there'll be more information on our website about both of those events, at aashtoresource.org
(36:04):
slash events.
Here's your quality quick tip of the day: a common problem with QMS documents and records is that they're out of date. It may help to enter due dates and automatic reminders into calendars to help keep you organized, on time, and in compliance. You can learn more by going to the Resource University section of our
(36:28):
website and checking out the "Road to Developing an Effective QMS" articles for more information on this topic.
Brian Johnson (36:31):
All right, thanks. And Mike, thank you so much for your time today. Good luck with all your future meddling with the databases and figuring out new AI tools. I have a feeling that all of the time that you're investing now is going to pay off for a lot of people moving forward very soon.

Mike Copeland (36:51):
Yeah, it's been fun. Thanks for having me.
Kim Swanson (36:52):
Thanks for listening to AASHTO Resource Q&A. If you'd like to be a guest or just submit a question, send us an email at podcast@aashtoresource.org, or call Brian at 240-436-4820. For other news and related content, check out AASHTO Resource's social media accounts or go to aashtoresource.org.