Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
Welcome to AASHTO Resource Q&A. We're taking time to discuss construction materials, testing and inspection with people in the know. From exploring testing problems and solutions to laboratory best practices and quality management, we're covering topics important to you.
Speaker 2 (00:20):
I'm Brian Johnson. On today's episode we're going to do something a little different. I gave a presentation at the 57th Mid-Atlantic Quality Assurance Workshop, which was in Hanover, Maryland, last week,
(00:50):
and I talked with the audience about our strategic roadmap and kind of shifting from strictly conformance to working more on continual improvement, and I felt like it was a good one, a good message to get out to the podcast audience. So I asked them for permission to broadcast it and they said, yeah, go for it. So I am presenting to you the presentation that I gave
(01:14):
at this workshop.
Now, the Mid-Atlantic Quality Assurance Workshop. Like I said, this was the 57th, so they've been doing this a long time. The Mid-Atlantic states in this group are Pennsylvania, New Jersey, Delaware, Maryland, West Virginia and Virginia, so they rotate their meetings throughout that region
(01:34):
and they have quite a bit of useful information for people. It appeared that there were a lot of people from the construction testing industry, and there were DOT people, researchers, associations, quite a few people there. So it's a three-day conference, or two-and-a-half-day conference,
(01:55):
and it's pretty well attended. If you're in the region especially, you might want to check it out. It's going to be hosted in Hershey, Pennsylvania, next year for the 58th. So if you're interested in something like this, check it out for next year, and I believe it's usually around the same
(02:16):
time, so mid to late February.
All right, enjoy my presentation. Okay, so a lot of people don't really understand who AASHTO is. Like, they think about the staff that might come into the laboratory from AASHTO Resource as AASHTO. But AASHTO is really the state DOTs, right?
(02:37):
So if you are a DOT, you know that you're the AASHTO member. You're the ones who set the standards, you're the ones who kind of give us direction on what to do. And the way AASHTO is laid out, there's the Board of Directors, there's the Committee on Materials and Pavements, and then the AASHTO Resource Administrative Task Group is a subset of the Committee on
(02:59):
Materials and Pavements.
So there's a chair and four regional vice chairs, and those are the ones who make the decisions on accreditation. So, for example, we had a situation recently where a laboratory had an absolutely horrible assessment. I don't even know how many nonconformities, but it was over 100 by a good
(03:22):
number. In those cases what we do is we would say, okay, this laboratory shouldn't be accredited, they shouldn't be getting contracts, they shouldn't be doing anything, really, other than fixing what their problem is. So what we do is we take the report and we send it to the ATG, the Administrative Task Group, and say we'd like to suspend
(03:45):
this laboratory's accreditation and not give them a chance to respond, because they don't really deserve it at this point.
And then we get a decision. So in this case the decision was to suspend the accreditation, and things worked the way they should have. And now action is being taken by the laboratory to get in a position to get the accreditation back, but they don't get to go through the normal process because they have
(04:07):
forfeited the right to do that.
So those are the kinds of decisions that the ATG makes, and that can happen. Accreditation can get suspended or revoked at any time. So cases of fraud can lead to immediate revocation and potentially refusal of service. So we can just say we're not going to do business with
(04:31):
you anymore.
But that has to be approved by the ATG, and then below that you've got the staff. So that's AASHTO and AASHTO Resource. Okay, so we covered it. At AASHTO Resource, you know, a lot of places where I get to speak, we talk just about the accreditation program and how that works. But actually, at AASHTO Resource, we provide proficiency samples and assessments that cover all
(04:53):
those materials.
And then there's another component. Obviously there are some pretty important materials not shown up there, like concrete and masonry, which are covered by CCRL. Now, CCRL and AASHTO Resource share the same building, where we work together very closely.
(05:14):
The AASHTO Accreditation Program accredits based on CCRL assessment reports and proficiency sample reports, but they are owned and operated by ASTM. So AASHTO and ASTM work very closely together in this regard, even though they don't always work as closely together on standards development.
(05:38):
Now today, since this is the quality assurance workshop, I've geared this a little bit more towards just quality concepts than how to maintain your accreditation. So one thing we worked on last year was developing this strategic roadmap, and it's a subset of AASHTO's strategic
(05:58):
plan, but it lays out a slightly different vision than what we've had in the past, and what we're trying to do is really get people to think more about quality and less about just resolving nonconformities and adhering to standards.
So there's a lot of, I think a lot of times people get lost in
(06:23):
the minutiae, and we get lost in the minutiae as well. So what we'd like to do is kind of get people heading in the direction of improving quality overall, instead of just being buried in details constantly. Not that we're going to completely ignore details, but we're going to focus more on quality improvements.
And we also have a part of this mission where we want more
(06:43):
collaboration with key stakeholders than we've had in the past. So I talked a lot about what we do with AASHTO members, but there are a lot of other people who use the accreditation program as well, like building departments and counties and cities and the FAA and the Corps of Engineers. There are a ton of entities that use the AASHTO
(07:07):
accreditation program. We need to make sure we're delivering for them too.
Speaker 1 (07:15):
Okay, so part of this
strategic roadmap.
Speaker 2 (07:16):
It has a bunch of objectives. I'm going to talk about one of them, and I alluded to this earlier, talking about continual improvement.
Speaker 1 (07:26):
And there are a few
processes associated with this
improvement.
Speaker 2 (07:29):
You know, this continual improvement process, that's developing measurable action plans and quality-related objectives that you've determined on your own, that you evaluate every year, and having benchmarks and
(07:53):
other goals that are outside of just your normal operations. But you've established goals associated with quality. I don't know how many of you have them, and that's okay, but let's talk
(08:14):
about it if you don't have them.
So if you're developing measurable action plans and quality objectives, there are some things you have to think about. You have to ask yourself, how do we know we're doing well? So you have to figure out, what are you going to measure? Are there any key performance indicators we can use?
(08:35):
If we have any, what are the baselines? Where are we starting? And then, do we need any resources to get started, to collect this data and figure out how to get better?
Well, lucky for you, if you're familiar with us, we do have some built into the program. So you can use the AASHTO Resource
(08:58):
and CCRL assessment reports as performance indicators, and you can use the proficiency sample data as a performance indicator. The proficiency samples are a lot more straightforward, and I'll get into that. The assessment reports are not so straightforward.
Is anybody using assessment reports as a performance
(09:18):
indicator? Like, do you have goals? I don't want any more than this many nonconformities during an assessment. Or, I want to improve our number of nonconformities, if you're that laboratory I mentioned before, maybe from 160 to hopefully 80 or less. Way less would be great. Has anybody used that before?
(09:40):
You have, a little bit. Yeah, it's really tricky. I want to explain some of the reasons. I don't know, I would want to have a conversation with you if you're using the number of nonconformities, because there are some variabilities that are inherent in the assessment process, and I'm sure those of you who have been through the
(10:03):
assessment program a few times have seen it, right?
A lot of it is based on what that person saw in the few days that they were there in your laboratory. Equipment can break down. Weird things can happen. The assessors might have different experience than you expect. They might know a lot about aggregate and really not a whole
(10:24):
lot about concrete, or a lot about asphalt and not a lot about emulsions, and that can kind of shape how that report comes out, and you have to be prepared for that. So I think there's a bit too much variability with the assessments to have some really realistic quality objectives
(10:44):
tied to the number of nonconformities.
Now, I'm just kind of trying to be honest with you about our variability that we have, but don't discount your own variability as well, as a laboratory. There's that too, but you would know what that is better than I would. All right, so let's talk about the assessment reports.
(11:06):
I'm going to talk about the process a little bit. So what do people typically do after an assessment, right? They get the assessment report, they address all the nonconformities, and they just hope and pray that we accept their corrective actions. That's not really where we want to be, right?
(11:27):
What we want to do is, we want to get the assessment report, we want to start thinking about those nonconformities more, analyze trends, and I will get into that, and address systemic issues. So this is that shift from not just conformance to continual improvement.
(11:47):
You have to think about those systemic issues that you're seeing.
And then you do have to report your corrective actions. And this is another step a lot of people don't do: monitor the effectiveness of your corrective actions. So it's not enough just to get it accepted by us. You have to think about what you've done. Are there ways you can make sure that you've kept up with it,
(12:08):
so that the next time you get an assessment or an internal audit, that same issue doesn't come up again?
Okay, so let's look at a couple of examples. These are all probably familiar nonconformities to you:
(12:28):
competency evaluations not performed by the deadline, equipment not standardized or calibrated by the deadline, internal audit not completed by the deadline, management review not completed by the deadline, or not at all, which I think is probably more common. So you see some trends there, right?
(12:50):
So what we normally get in that first example is people will say, okay, here are my competency files for my technicians, here are the equipment standardizations we didn't have, here's the internal audit, here's the management review. They never think about, okay, well, why can't we get anything done on time? What's going on? Do we not have enough staff? Yes, that's true.
(13:14):
If you're a DOT, that's probably true, right? You don't have the people you need all the time to get the work done. Do we not have a system in place to keep track of these deadlines? Do we have an outdated system that just isn't working for us, and we just haven't been able to figure out a replacement yet? So there are a lot of things you need to think about with your processes, instead of just continuing to put these band-aid
(13:35):
fixes in place, because you're never going to continually improve until you address those trends.
Monitoring is another really challenging thing. We struggle with this as well. So when we get audited, and really our internal audits are the most painful. So one of the problems of being really good at auditing is you
(13:58):
have really good internal auditors, and they just destroy you. So when we go through an internal audit, we usually get some kind of nonconformities and a lot of really thorough follow-through by our quality manager, and it does make us better, and it's been really good. But it definitely gives us the perspective of our customers, which is also
(14:19):
important. But monitoring is hard.
I just want to give you some tips for this. Set up appointments in whatever your calendar is. We use Outlook, so I'd say that's the best thing. Internal audits, people really struggle with getting those done on time. You really need two appointments for internal audits.
(14:39):
You need one to prepare for it and you need one to conduct it, and you really need one with your boss included, to attend the meeting where you go over it, so they know what is going on. That gives you a chance to not just get what you need, but to let them know what's really going on in your lab. Tell them some successes, tell them some struggles you have.
(15:02):
It'll help you get better. Conduct effective internal audits. Work on that. Think about your process for that. Try to make some improvements there.
Improve management reviews. We've done podcast episodes, webinars, tons of different
(15:23):
sessions at our technical exchange on management reviews, and people still don't really understand them. So I'd say, if you struggle... I'd say a lot of DOTs understand these. If you are new to our program, you might not know what this term is, though, and I would encourage you to go to our
(15:43):
website and try to find more information on management reviews. But basically, it kind of closes the loop and gets management more engaged with the quality systems and what you're going through as a laboratory. Next, scheduled updates for your policies, procedures and forms.
So this is another thing. A lot of times, people won't ever look back at what they have
(16:05):
to see if it's up to date. They're satisfied that the nonconformities are resolved, or they didn't get a finding on the next audit, but they don't go back and look. Okay, are we still doing this? This procedure says that I have to fill out this form. We haven't used that form in 10 years. Why does it still say that?
So go back and look at those and put them in your calendar.
(16:27):
Review this every three years, or just put a date far out, and make sure you get that reviewed and see if it's still relevant. Make sure staff knows where it is. Make sure people are trained on it. You'd be surprised how quickly things go out of date, even if you've had a system in place for a while.
(16:47):
Things change. Technology changes, people's understanding changes. People find shortcuts. Instead of saying that shortcut is wrong because it's not in the policy, change the policy. Put the shortcut in there; it works great. Don't be tied to these things that have been okay before. You want to be not just okay, you want to get better over time,
(17:10):
okay.
So I did mention this a little bit earlier, establishing quality objectives based on an assessment report, and I gave you a little bit of the pitfalls that could come with doing that. There are ways to do it. I would say look at
(17:30):
improvements rather than a strict number.
Years ago, I sat in close-out meetings with laboratories where the managers had a really unrealistic objective of zero nonconformities, and when they didn't get that, they were really mad at their staff, and that was not productive, and it was not useful, and it was not
(17:53):
realistic, and it made everybody just not care about anything. They were just mad at each other, and it made it hard for them to get better.
So think about reasonable quality objectives. If you ever want to talk about that with us, we're happy to have a meeting and look at what you're dealing with now and come
(18:14):
up with some goals for next time. Make sure you use us, and it's not just the DOTs, but especially the DOTs, since we really do work for you. Take advantage of us more. I think a lot of people don't pick up the phone or send an email and say, hey, what do you think about this? We're happy to help with that. We have over 2,000 labs in the program.
(18:35):
We've seen it all, and we can kind of give you some ideas about best practices. But you know, that's not just for the DOTs, that's for everybody.
So I would be leery about using the number of assessments or number of findings. But okay, let's get into proficiency sample data.
(18:57):
Way more objective, way more straightforward. This is an example of a proficiency sample report. This is on the hot mix territory.
One thing I want to point out, one thing that makes our proficiency sample data a reliable, objective source of data, is the number of participants we have got. So if you look at that line number three... well, let's look
(19:19):
at line number two. That's the maximum specific gravity, or Rice: 886 data points are involved in the analysis. We use the average and standard deviations to determine the ratings. When it comes to statistics, the amount of numbers is really telling for how reliable that data is going to be.
(19:41):
So if you've ever looked at a research report at TRB, for example, or some other research program, where they have six data points, I'd be very curious to see if that is reliable data, and if whatever they figure out as their conclusion is going to
(20:01):
actually be a conclusion you see in real life. But when you see 800-some data points, then that gets you a better idea. Like, hey, this is probably really happening.
So what we do is we provide you the average, the 1s and the percent 1s, so you get your standard deviations, and the ratings are
(20:24):
issued based on your standard deviations, and I'll show you that in a second as well.
So, ratings of 5. People get confused about this too. If this is just basic for you, I'll move quickly. But the data basically falls
(20:44):
out on the bell curve, and when you're within one standard deviation you'll get a rating of 5. And if your result is above the average it's a positive number; if it's below the average it's a negative number. But a 5 and a negative 5 are still great.
Then after that first standard deviation it goes to half standard deviations. So 4 and negative
(21:05):
4 is within 1 and a half; within 2 is 3 and negative 3. Once you get beyond that, you're kind of in the risk area, so that is considered a low rating as far as the AASHTO Accreditation Program is concerned and the Proficiency Sample Program is concerned. What we do with that is we say, you know, this is where you need
(21:27):
to take corrective action.
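As a rough illustration of the banding just described, here is a minimal sketch in Python. The function name, the exact 2.5-standard-deviation boundary for a rating of 2, and the handling of results that land exactly on a boundary are my assumptions for illustration, not AASHTO's published procedure:

```python
def psp_rating(result, mean, stdev):
    """Sketch of the rating bands described above (not AASHTO's code).

    |z| <= 1 SD  -> 5   (within one standard deviation)
    |z| <= 1.5   -> 4   (half-standard-deviation steps after that)
    |z| <= 2     -> 3
    |z| <= 2.5   -> 2   (low rating: corrective action expected)
    |z| <= 3     -> 1
    beyond 3 SD  -> 0
    The sign of the rating follows the side of the average.
    """
    z = (result - mean) / stdev
    a = abs(z)
    if a <= 1.0:
        magnitude = 5
    elif a <= 1.5:
        magnitude = 4
    elif a <= 2.0:
        magnitude = 3
    elif a <= 2.5:
        magnitude = 2
    elif a <= 3.0:
        magnitude = 1
    else:
        magnitude = 0
    return magnitude if z >= 0 else -magnitude
```

So, under this sketch, a result half a standard deviation above the average rates a 5, and one 1.8 standard deviations below the average rates a negative 3.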
However, we don't suspend accreditation based on a 2 and a negative 2. We give laboratories an opportunity to correct the issue at that point. But when you get beyond three standard deviations, with a 1 and negative 1 or a 0, you're really far from the average, and it's still kind of hard to get into
(21:54):
suspension, because you have to do this on both samples of the pair in two consecutive rounds.
So that's four times that you would get a zero, a one or a negative one. So that shows us that you're not making improvements. You're not doing anything to improve.
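To make that trigger concrete, here is a small sketch, assuming ratings are recorded as (sample A, sample B) pairs per round; the function name and data shape are illustrative, not how AASHTO actually stores or evaluates this:

```python
def should_consider_suspension(round_ratings):
    """Sketch of the rule described above (not AASHTO's code):
    suspension comes up only when BOTH samples of the pair rate
    1, 0, or -1 in TWO consecutive rounds -- four low ratings.

    round_ratings: list of (sample_a, sample_b) tuples, oldest first.
    """
    def is_low(rating):
        return abs(rating) <= 1  # a 1, a 0, or a negative 1

    # Compare each round against the one that follows it.
    for previous, current in zip(round_ratings, round_ratings[1:]):
        if all(is_low(r) for r in previous + current):
            return True
    return False
```

So a lab whose last two rounds were (0, -1) and (1, 0) would trip the check, while (0, -1) followed by (3, 0) would not, because one result in the second pair was satisfactory.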
Some people, even after they get suspended, will order a
(22:16):
blind sample, and they will fail again, and we say, what are you doing? And they say, well, we thought we'd get satisfactory ratings this time. Why? Why did you think you were going to get satisfactory ratings? You've done nothing to improve. So you really have to think about, why am I getting these low ratings?
We can't know. That's another thing. We can't know what happened at your laboratory for you to get
(22:40):
the low ratings. We can give you an idea of things to think about if you're really not sure, because we have seen a lot of things, but only you can know what happened.
Here's an example of a laboratory that got a negative one and a negative two. So those are not great ratings. They're none of you, by the way; I pulled somebody from far away, so I made sure we
(23:02):
didn't get anybody in this room in trouble. And so we provide that data. We also provide these Youden diagrams, so it'll show you where your laboratory falls on the scale. So that's all of the data, and then that red dot is that laboratory.
So you see, they're kind of on the outside. That oval is the... if you're beyond there, you're in the zero range,
(23:23):
and that's not where you want to be. But you see a lot of data points beyond there. So there are some strange things going on out there.
Another thing, and I don't think people look at this enough, is the performance charts. So these will tell you how you're doing over time.
(23:44):
And what you really want to see is a trend line that kind of hangs around that red line, the zero line. So the perfect average is that red line, and the round data point is the first, or the odd, sample, and the square one is the even sample.
(24:07):
So we always send a sample pair, so you can see how repeatable your results are when you see that spread between the circle and the square on each data point, and then you can see how you're doing from one year to the other by watching that trend line.
So this is an example of a really inconsistent testing lab.
(24:29):
They're very consistent in that their odd and even samples are pretty close together, except for that one, but they're all over the place when it comes to the averages. So something's going on. There's some erratic behavior going on here. So it's probably a people problem.
(24:50):
You know, somebody not following the standard. When it's equipment, people usually want to blame the equipment, but when you see an equipment problem, you see a line that goes drifting over time, slowly, either drifting down or drifting up. When you see this, it's usually somebody mishandling samples,
(25:11):
not following procedures, not training. Those are the kinds of things you want to look at, but these are things that you can use to set performance goals for your laboratory in an objective way.
So, like I said, this is a way more straightforward method of
(25:32):
setting quality objectives. So we give you the ratings. You can use those as your key performance indicators. We already know what the baseline is. You don't have to guess.
So you want to get a three or better, but then you want to think, is that good enough for you? Let's say you're getting threes and you say, well, that's good,
(26:15):
we're in compliance. But is that where you want to be as a laboratory? Do you want to be a slightly better than average laboratory? Maybe you do. I'd say most people don't. Nobody gets out of bed in the morning and says, I want to be really just slightly better than average today. Right? Maybe you do. I hope not. It'd be hard to get out of bed like that.
(26:15):
So let's talk about some of the ones you can actually follow up on here. So think about these things: What should your goals be? When do you want to reach them? How do you want to get there? All right, and let's use that performance chart. So if I'm in that laboratory and I'm asking myself those questions, I say, okay, well, obviously I did really well the
(26:37):
previous time. This time I did pretty badly, so I want to think about it before I get my next sample. Let's identify at least one improvement opportunity that I can make in my laboratory before those samples come in the door.
So if I can set that goal and sit down with my team and say, what can we do, guys? Like, what improvement can we make before
(27:00):
we test this next one? I should definitely be auditing my technician on sample handling.
These are things people also don't think about sometimes: sample handling. Not just testing, but what happens when that box comes in the door? Where does it go? How does it get logged? How do I make sure that it gets tested on time? What do they do with it to get it out of the box?
(27:24):
Where do they store it? Do they split it? Like, what's the process that's not written down somewhere that they might handle it with? And then they also probably have something to use for the calculations and the reporting. Let's go over all that stuff to make sure we're doing things
(27:45):
properly.
And if I'm in this laboratory, I'm probably not going to shoot for the moon here, and I'll probably go for a four. I might say, listen, you know, like, we're all over the place, we just need to stabilize. So maybe a three or four is okay this time, but let's try to, let's just do better, right? Continual improvement. And then,
(28:09):
after that.
Well, actually, this laboratory has done pretty well with repeatability. So, well, I won't go back all the way, but we give a repeatability rating too. It kind of tells you how far apart the two samples were. So in this case the laboratory was at a 5RD. I would expect to continue that trend, and then I would also
(28:30):
want to review the ratings with my technicians. I might have not done that before, but this time I want to do it. I want to close the loop on this. So these are just some simple things that everybody can do to improve and figure out some objectives and follow through on them.
So then, for monitoring, I want to think forward. Like, let's
(28:52):
think ahead to 2026. Okay, if we achieved our goal of four, now it's time to go for the five. So let's see if we can improve that. And then, once we go over all these steps again, then we can be with the technician and/or be with management, put it in our internal audit, put it in the management review, tell everybody what a great job we're doing, and then we feel good about that, right?
(29:14):
So, when I talked, I just very briefly touched on the roadmap that we came up with on this topic. There is a lot more in there. It's a one-pager, but there's a lot of good stuff in there. So if you're curious about what's on there, it's on our website, and there's a QR code here if anybody wants to know more.
And then I also want to tell you that we have the AASHTO
(29:39):
Resource Technical Exchange coming up. It's very far from here, but it's in Bellevue, Washington, March 17th to the 20th. Has anybody in here been to the Technical Exchange? Yeah, I know a couple of people have been.
This is a great opportunity to meet with other people from all
(30:00):
around the country who work in our industry: lab managers, lab technicians, equipment manufacturers, LIMS people, I don't know what you call them, like IT systems developers, software engineers, people who work just in the materials testing
(30:22):
industry.
We usually get about 250 people showing up to this, and there's a lot of time to communicate with them in between breaks, at different events, in the evenings. We have a lot of panel discussions, workshoppy kinds of activities, and it's a lot of fun. It's probably actually not too dissimilar from what you guys do here, except that we usually get people from all over the
(30:45):
place instead of it being regional. But I think it's worked out really well. We've done it quite a few years, and we're expecting another good show here in Washington State.
I also want to tell you about the podcast. I do host this podcast. We just wrapped up our fifth season. We've got over 100 episodes.
(31:06):
We cover all kinds of different topics related to what we do and a lot of things at AASHTO Resource and CCRL. If you're ever curious or looking for extra information, that's a good source too, and if you have questions, you can reach out to me at any time. But that's all I have. Does anybody have any questions?
(31:27):
Yes, I was curious.
Speaker 3 (31:33):
You talked about the proficiency sample program and how, you know, you've got a good sample size on the number of participants in the program. I was wondering, do you guys ever look, like, year to year at what, on an absolute basis, the standard deviation is doing and trending? Like, do you see any trends that labs nationally are getting tighter
(31:54):
in certain tests, or they're getting wider? Do you know what I mean?
Speaker 2 (31:57):
I know, I know, we've got so many, year to year. Yeah, I would say over time it has gotten tighter in a lot of tests. Actually, you can kind of see that if you look at the standards in the Committee on Materials and Pavements, because a lot of times they'll use the proficiency sample data to establish the precision and bias statements.
(32:18):
So you'll see an old standard might have a wider range of acceptance than a newer standard might have, and some of those are through an interlaboratory study, but some of them are just using proficiency samples.
But it's tough. We do get a question related to that quite a bit: why don't you just use the precision and bias statement instead of the average?
(32:39):
It's like, well, if that was an old P&B statement based on data that was maybe unknown, like sometimes we don't know where the data came from, do you really want to use that for your quality objectives? You know, that might be fine for, like, material acceptance, but when we're talking about making quality improvements, that's a different topic. But yeah, I think it is, yes.
Speaker 4 (33:05):
I have a question about the proficiency samples. We receive two samples for each type of test, so are those samples identical? Based on that, can you find a repeatability within the lab data? If yes, then how do we find the standard deviations for the lab
(33:26):
repeatability data? Do we include all of the sample A and sample B information together and then come up with that?
Speaker 2 (33:36):
Yes, yeah, and actually the equation is on our website if you want to determine it, just for your own reassurance. Well, I guess, to answer your first question, they're not always the same. So sample A and sample B might be slightly different, but even though they are, it's like a comparison of
(33:59):
your results to the standard deviation, and then a comparison of those differences. That's how you figure out that interlaboratory repeatability. But the specific equation is on the website, so you can figure out exactly what it is if you're curious. So even though there's a difference, it's factored into the equation. So I guess that's the short answer to that.
Speaker 5 (34:25):
Yes.
So on our last couple of assessments we have received observations on our customer satisfaction surveys. Based off the first observation, we did a bunch of things, and still didn't get a whole lot of responses back for customer satisfaction. Do you guys have anything on your website, or any
(34:47):
guidance, on how to get better or get more people to respond to the customer satisfaction survey?
Speaker 2 (34:57):
Okay, let me ask you this. When you're talking about the customer satisfaction survey, can you tell me, who are those going to?
Speaker 5 (35:08):
So the first time, when, you know, we got an observation, we tried to improve it. So basically we did like an email blast to everybody that was in our e-cams anyway. So every producer that's in there with an email address, they all got an email. I forget what the response rate was, but it was very low.
(35:32):
I believe we also put a link at the bottom of the signature in our emails so that anybody can do it. Again, the response rate is very low. So every time you guys come in and do an assessment, it always comes up as an area for improvement, but we're having a hard time figuring out how to get them to send back
(35:56):
some response.
Speaker 2 (35:58):
So you're trying to collect feedback to improve your processes, but you're soliciting the producers.
Speaker 5 (36:06):
Our customers, your
customers, you're not getting
customer feedback.
Speaker 2 (36:10):
So you're trying to collect it and make improvements on your own, right? But you're not getting feedback. So does that indicate to you guys that they're satisfied?
Speaker 5 (36:24):
Yeah, I mean, that's, you know, when they come in and they assess it, that's my response. It's like, they get a lot of feedback, so they're satisfied and just ignore it. You know, even when I go into the garage to get my car repaired, ten minutes later I get a survey. You know, do I do it? Sometimes I do, sometimes I don't. I think it's just a big issue in general that people don't want to do these
(36:48):
surveys.
Speaker 2 (36:50):
Yeah, I think you're right. I think as a country, the whole population is suffering from fatigue from all these surveys. And the make-it-register-yourself thing, like the Radio Shack methodology
(37:10):
of getting all your personal information to sell you batteries, somehow won, and now everybody's doing that, and we just get exhausted by it. So you're probably right, your customers are probably fatigued. I know we wear people out with surveys. We send so many surveys. What do you think about this? What do you think about that? And we do get some responses, actually we get a
(37:32):
decent response rate, but it is kind of difficult. You don't want to overreact to some of it, because you don't get those average people who are just satisfied; they're not going to respond. You get the people who are super excited, because it
(37:52):
probably went better than it should have for them, and you get the people who are just really mad because something didn't go their way.
So the way we deal with it, we get that data and we treat it, and I don't think we're doing it the right way, by the way, like we treat it like a complaint. But I'd say it's not a complaint unless it's unsolicited. Because
(38:13):
a lot of times when we follow up on the people who send negative feedback, I'll say, oh, you know, I looked into this, and their response is usually, oh, I didn't know you were going to do that. I didn't mean to cause any trouble. I wouldn't have said anything if you weren't pestering me to give this feedback. And I'd say, okay, well, I feel good that we made an improvement either way.
(38:33):
But I also feel like I wasted a lot of time worrying about this customer who wasn't actually upset.
So I'd say, to answer your question about that, you're doing what you can do. I would say try different methodologies, but if you're just not getting anything, you're doing your best to reach out to them and get feedback.
(38:55):
So you might want to try some different things. You might want to do maybe less frequent cold calls or something, and say, hey, I just wanted to follow up with you on this after a project. That's one thing that we found to be
(39:17):
a better way to get feedback, is if we target an event. So after somebody just gets through an accreditation process, we'll send them a solicitation for feedback. That has been way more successful than doing it based on a calendar, like every January we send it out. We had horrible results with that. But if it's right after something, they usually are more inclined to do it. But when I say more inclined, we may be going from like
(39:38):
two percent to 25 percent. Like, 25 percent on a survey, I feel like is pretty good. So if you're getting anywhere near that, I'd say that's all right.
I don't know, does anybody else have any input? That's kind of a tough one. Kathy, are you getting... so Kathy's going to go up. Am I eating into your time?
(39:59):
Okay, I think we have a few more minutes. Any other questions? Yes?
You mentioned the lab
Speaker 4 (40:07):
that had over 100 nonconformities. Yeah. Since you don't actually accredit like a blanket accreditation, how many different tests, for how many different tests did you pull their accreditation?
Speaker 2 (40:18):
I'm guessing it was probably all of them. It was all of them, yeah. How many was that? It was, how many tests, I don't know, maybe like 60 tests, something like that. You know, the typical commercial testing lab is doing soil, aggregate, concrete, masonry, asphalt mixtures, and it may be like six to ten tests in each scope, and then some quality
(40:42):
management system ones, some standards. But that particular laboratory, it was just like they weren't doing anything in between assessments.
And when we see that, so like, when we suspend the entire accreditation, it's usually because there's an
(41:04):
underlying problem or something that impacts the conformance to AASHTO R18. So AASHTO R18 is kind of like the bedrock of the whole program, and if they're out of conformance with that, it can affect the entire accreditation. They may be doing a great job with their concrete cylinders. That's usually what they're not doing a great job with, by the way. They could be doing a great job with that, but
(41:27):
they'd still be suspended if they weren't keeping up with all their quality management system obligations.
Thank you. You're welcome. All right. Well, thank you for your time and attention and for hanging in there. But yeah, thank you.
Speaker 1 (41:47):
Thank you. Thanks for listening to AASHTO Resource Q&A. If you'd like to be a guest or just submit a question, send us an email at podcast@aashtoresource.org, or call Brian at 240-436-4820. For other news and related content, check out AASHTO
(42:07):
Resource's social media accounts or go to aashtoresource.org.