
February 7, 2025 40 mins

In this new #StudyFinds episode of #ProveItPod, Dr. Matt Law has a meltdown over the dirty C-word and proceeds to burn all the bridges with malice aforethought. He didn't even hit the alarm. Grab your fire extinguisher, and listen now!

Episode Resources:

Study explores complacency during workplace fire evacuations. (2025). Professional Safety, 70(1), 13.

Accessible at www.assp.org with a valid membership.

Occupant complacency in workplace fire evacuations

Gold, D., Thomas, D., Vincer, N., & Pitkin, M. (2024). Occupant complacency in workplace fire evacuations. Humanities & Social Sciences Communications, 11(1), 1134–15. https://doi.org/10.1057/s41599-024-03665-3

 


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:03):
Okay, folks,
I'm just going to throw this disclaimer
in here right at the beginning.
Few topics get me heated,
and I am on fire today.
Get ready, folks.
It's an all-new Prove It To Me.

(00:47):
Hello, everyone,
and welcome to Prove It To Me.
I'm your host, Dr.
Matt Law.
If I were indeed on a consistent release
schedule at this point,
you could argue that this episode is a
week late.
I have to be honest,
I started writing this a month ago,
and I had to walk away from it for a
bit.
This was a tough one for me to get

(01:07):
through.
This is about as controversial as I
have dared to get so far.
I'm going to create division.
You are either going to love what I
have to say,
or you will hate me with every fiber of
your being.
At the end of this episode,
you will either be a forever follower
of the podcast,
or you are going to block me on all
platforms.
I might even face some retaliation from

(01:29):
the authors of the study I'm about to
critique.
That's okay.
I'm ready for it.
I probably deserve it.
In the end,
I came back to finish this episode and
get it released just because I
personally feel it's important that we
look at these things thoroughly so we
don't find ourselves moving in the
wrong direction as researchers and as

(01:50):
environmental health and safety
professionals.
So let's get into it.
I want to start today by saying that I
am not a philosopher,
especially when it comes to workplace
safety.
There are self-proclaimed safety
philosophers,
including my buddy Professor Wyatt
Bradbury (seriously,
it's in his LinkedIn profile headline).
I can tell you, that's not me.
Now I'm not gonna rag on philosophy,

(02:11):
there's a place for it.
I'll even say there's a need for it.
Occasionally,
if you get a few Bourbons in me,
I'll even dabble in it.
I might even say some things that sound
super highly educated and thought
provoking.
So ahead of you finding yourself in a
situation with me when I start engaging
in philosophical discussions,
I'm going to tell you that in that
situation,

(02:31):
I am probably not actually correct.
Even if I sound correct.
And it's probably time for me to go to
bed.
Just send me to bed.
Professionally, for me,
I do not have time for philosophy.
I'm a quantitative guy.
I spend my time in data.
My time is consumed with measuring
concepts with tangible numbers that

(02:53):
help me analyze relationships between
real variables.
I don't have the desire to make time
for philosophy.
My perspective, right, wrong,
or indifferent,
is that the occupational safety and
health profession is flooded with
philosophy.
Everywhere you turn,
it seems somebody has a big idea that
challenges the way we do things.

(03:14):
And honestly, that's great.
Our workers are still getting hurt,
we are still experiencing massive
losses,
and we are still seeing environmental
disasters.
We should be challenging the way we do
things.
So often, though,
these big ideas are founded in little
more than petty contrarianism rather
than replicable evidence.

(03:35):
My favorite things are when we rename
key concepts in the name of innovation
with absolutely no data to prove that
renaming something will actually change
any kind of outcome.
Have you ever heard someone argue about
whether to call it a near miss or a
near hit?
Nothing makes my blood boil more than
this absolutely useless,

(03:56):
unfounded argument.
Near miss, near hit,
who really gives a shit?
The problem isn't the name.
The problem is that you aren't
capturing the data you need to make any
kind of difference in your outcomes.
When you fix that,
then you can come talk to me about what
you want to call it.
In fact, folks,
you should know by now that part of the

(04:17):
whole reason for this podcast is to
challenge these big philosophical ideas
by highlighting the real data and
evidence that we get from good
research.
You could say that there is no room for
philosophy in this podcast because
we're too busy talking about the real
stuff, not theory or rhetoric.
It might even sound insane to you at
this point that I would even broach the
topic of philosophy on a podcast that

(04:38):
claims to bring you real research,
real data, and no bullshit.
Nevertheless,
I have to tell you that today's study
finds got me particularly heated
because in its design,
it directly conflicts with a core
philosophy that I totally buy into.
I am letting you know this at the
beginning because this philosophical
concept that I buy into does create

(05:00):
bias in how I will critique this study.
I'm going to eviscerate the
construction of this study from
beginning to end with malice
aforethought simply because its intent
and its design are so contradictory to
my firmly held beliefs and how I
approach workplace safety, and I am
genuinely worried about the perceptions

(05:21):
that this study may create should it be
acquired and adopted by the general
population.
I could be wrong,
but I really don't think I'm wrong and
therein lies my bias.
So here's what I want to do.
I want to go over the summary of the
summary,
then I want to explain to you the root
of my bias,
and then if you're still with me after
I get unreasonably tangential about my

(05:43):
deeply held beliefs,
then you can follow along with me as I
rip apart this study limb from limb.
If you're a subscriber to the
Professional Safety Journal,
you may be familiar with the fact that
they have several different types of
articles every month.
There are usually a handful of peer
-reviewed articles,
but then there are also updates about
the American Society of Safety

(06:04):
Professionals,
brief synopses about OSHA initiatives
and standards updates,
and some news that the editors pull
from other locations.
Today's study finds comes from the
Safety Matters section of PSJ's January
2025 issue.
The headline reads,
study explores complacency during

(06:26):
workplace fire evacuations.
In a few short paragraphs,
the editors summarized the study,
which explored occupant complacency
during evacuations and suggested that
the strong management support for fire
safety,
the integration of a fire safety
culture,
and having trained fire wardens can
help prevent occupant complacency.

(06:47):
All good stuff, right?
So maybe you're thinking,
what the hell is your problem, Dr.
Law?
What beef could you possibly have with
this study?
There's this word that makes me want to
vomit every time I see it.
It is a word that I feel is inherently
drenched in everything that is wrong
with what some may consider as the

(07:08):
traditional approach to safety.
To me, it's a vile word.
The word is complacency.
I mentioned my buddy Wyatt earlier,
and he shared with me his thoughts and
work on this,
as well as conversations he had with
Ron Gantt and Ivan Pupulidy.
All three of these guys are highly
intelligent,
and I hold them in high regard.

(07:30):
I don't want to misconstrue any of what
I'm about to say as direct quotation
from any of them,
but my thoughts here are inspired by
them,
as well as my own experience over the
years working in occupational safety
and health.
Complacency is a word that is overused
and abused in our profession.
It is a word that you'll find in the

(07:51):
section of incident investigation
reports prepared by what I would
consider less mature workplace safety
programs.
It is a term that inevitably blames the
victim of a workplace incident for the
situation in which they and they alone
are accused of placing themselves.
The word complacency is a scapegoat and
it blinds observers and investigators

(08:12):
from organizational, situational,
environmental,
and systemic factors that almost always
should be considered as the root cause
or part of the collection of root
causes that cause incidents that
negatively impact people,
property, and the environment.
You see,
complacency as we know it is something
that is natural and could be expected.

(08:33):
In fact, we do expect it to happen.
The Dreyfus mental model for skill
acquisition walks through the stages of
developing any type of skill from
novice to competence to proficiency to
expertise and finally mastery.
If I'm oversimplifying this,
I apologize,
but it's for good reason.
At every advanced stage of skill

(08:53):
acquisition or development,
less mental power is needed for the
basics.
As you spend less mental or cognitive
power on the basics,
you have room for more complicated
tasks,
more technical parts of the skill,
even reaching mastery,
where the basics naturally happen
without any active dedication of mental
power.
Put another way,

(09:14):
complacency is kind of the goal here.
Yes, we're in theory here.
Maybe sometime I can look to find some
quantitative data from brain scans
during skill development to see if this
has been proven out.
For now,
it's a well-accepted theory that has
persisted for more than four decades.
Now Wyatt gave me an example that makes

(09:34):
a lot of sense to me.
Think about your daily commute.
The first time you took that commute,
you were probably super focused on
where you were driving,
how you were driving,
and all the little details along the
way.
You might've turned the radio down at a
complicated intersection or as you
pulled into the parking lot to conserve
cognitive power.
Yes, that's why you do it.

(09:56):
As the days, weeks, months,
and years went on and you've made that
same commute,
you probably don't think as much about
it anymore.
You may have had days where you've made
that commute and at the end,
you don't remember any of the details
about the commute itself unless you
came across something really different
or jarring,
like you got pulled over or you swerved

(10:16):
around some roadkill or you had to stop
for a bus that's not normally on your
route.
Over time,
you have become complacent to the basic
details of your commute.
Is it wrong?
No, it's natural and it's expected.
We expect this from our workers too.
We expect them to get more efficient

(10:36):
and better at their jobs as they master
the skills it takes to accomplish them.
It is natural on their first day to be
highly aware of every little detail as
they learn what the hell they're
supposed to be doing.
Over time,
they may learn to naturally ignore the
things that have never hurt them
before,
to walk the path they've always walked,

(10:57):
because those were basics from day one.
Mental power is needed elsewhere to
continue to master the work.
So why then,
if this is natural and expected,
should we ever blame them when a new
situation is introduced outside of
their control that disrupts the routine
and they just happen to be caught in

(11:18):
the middle of it because we expected
them to be masters of their work?
Why are we aiming to try to fix
complacency when we should be fixing
the system around the worker to ensure
they can continue to be masters of
their work in a safe manner?
Why would we ask the worker to rob
themselves of the mental power required

(11:39):
to be efficient at their jobs just
because we couldn't fix the
organizational, situational,
environmental,
and systemic factors around them that
are the true causes of the incidents?
All I'm saying is if your root cause
that you've found is complacency,
you're not digging deep enough.

(11:59):
If you're trying to fix complacency,
you're trying to fix something that
cannot be fixed.
And honestly, I hope it's obvious,
we don't want to fix that.
Therein lies my bias approaching this
study.
If you're still with me,
and I hope you are,
just know that I'm going to be
critiquing this article with extra
scrutiny,

(12:20):
because I already disagree with the
apparent intent.
And I realize all of this gets me close
to that realm of being a pundit with an
opinion,
but I really feel very strongly about
this.
So here we go.
Let's dig in.
This article is open access,
so when I include the link in the
episode notes,
you will be able to click it,
read the whole thing, download the PDF,

(12:42):
whatever you want to do without any
special credentials.
The article is titled,
Occupant Complacency in Workplace Fire
Evacuations,
and published back in September 2024 in
Humanities and Social Sciences
Communications.
To start,
I can tell you the authors declared no
competing interests and they did not
receive any financial contributions for

(13:02):
performing the research or writing the
article.
So what really is the intent of this
study?
From the abstract, quote,
this study addresses the hypothesis
that if there is a clear definition of
occupant complacency during workplace
fire evacuations and control measures
are developed, tested and implemented,
the risks of injury and death related
to occupant complacency during

(13:23):
workplace fire evacuations could be
prevented or mitigated, end quote.
I mean,
that's not really how you construct a
hypothesis, but okay.
So the association we're looking for is
between what,
having a clear definition of
complacency as the independent variable
and the ability to implement effective
controls as the dependent variable?

(13:45):
Oh, okay.
So normally when you construct a
hypothesis for quantitative research,
it is the positive response to the
research question that goes something
like this.
Is there an association between X and
Y?
The hypothesis then goes,
there is an association between X and
Y.
Or a different research question,

(14:05):
what is the relationship between X and
Y when controlling for Z?
The hypothesis then is that there is a
relationship between X and Y when
controlling for Z.
You'll also see a null hypothesis,
which basically states that there is no
significant relationship between X and
Y.
If there is an association,
the correct way to state this is that

(14:26):
the study rejects the null hypothesis.
Or if there is not an association,
you say the study failed to reject the
null hypothesis.
The reason for saying it this way is
that without evidence to the contrary,
the null hypothesis is assumed to be true,
meaning there is no relationship
between variables unless we have the
data to prove it.
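To make that concrete, here's a minimal sketch in Python of how a correlation test leads to either rejecting or failing to reject the null hypothesis. Everything in it (the paired scores, the helper function, the critical value pulled from a t table) is illustrative, not from the study.

```python
import math
from statistics import mean

# Made-up paired scores for two variables, X and Y (illustrative only).
x = [2, 4, 6, 8, 10, 12, 14, 16]
y = [1, 3, 2, 5, 4, 7, 6, 9]

def pearson_r(a, b):
    """Pearson correlation coefficient between two equal-length samples."""
    ma, mb = mean(a), mean(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    var_a = sum((ai - ma) ** 2 for ai in a)
    var_b = sum((bi - mb) ** 2 for bi in b)
    return cov / math.sqrt(var_a * var_b)

# H0 (null hypothesis): there is no association between X and Y.
# H1: there is an association between X and Y.
r = pearson_r(x, y)

# t statistic for r with n - 2 degrees of freedom.
n = len(x)
t = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

# Two-tailed critical value for alpha = 0.05, df = 6 (from a t table).
t_crit = 2.447
decision = "reject the null" if abs(t) > t_crit else "fail to reject the null"
print(f"r = {r:.3f}, t = {t:.2f}: {decision} hypothesis")
```

The point is the framing: the data either give you grounds to reject the null, or they don't; you never "prove" the alternative outright.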
Again,
moving into the introduction section of

(14:47):
the article,
the authors state the paper will
address the hypothesis that, quote,
if occupant complacency during
workplace fire evacuations is
adequately defined and adequate control
measures are developed and implemented,
the risk of death or injury during fire
evacuations can be prevented or
mitigated, end quote.
By wording the hypothesis in this way,

(15:09):
absent of a research question to guide
the hypothesis and null hypothesis,
the authors are already assuming that
there is a relationship,
which arguably could create bias in the
construction of the study.
If you want a relationship to be there,
if you assume the relationship is
already there,
you can absolutely construct your
experiment to prove that the

(15:29):
relationship is there,
whether it factually is or not.
It's a dangerous pitfall in research.
It removes the objectivity.
Believe it or not,
a well-constructed study that proves a
relationship between variables does not
exist is just as valuable as proving a
relationship does exist.
You could look at it as a long,
drawn out form of process of

(15:50):
elimination.
Let's move into the research questions
because the authors do state them,
but only after stating their singular
hypothesis, which is weird.
The research question should be the
main questions that drive the design of
the study, the measurement instruments,
the sampling, really everything.
First research question,
is there a clear and adequate
definition of occupant complacency

(16:11):
during workplace fire evacuations?
Yeah,
so no variables or relationships there,
just a yes or no question apparently.
We're just rooting around to define a
problem that we're absolutely sure
exists,
but we don't know what words to put to
it.
Fine.
Second research question,
are workers upon receiving an alarm
complacent in their decision and

(16:32):
behavior to immediately begin movement
towards a safe location as described in
previous literature?
Oh, that's cool.
That makes me think we're going to
observe fire drills and measure the
percentage of people in the building
who comply with the alarm and how quickly
they do so.
That's promising.
Let's move on.
Third research question.
What are the behaviors and conditions

(16:52):
that are antecedents to occupant
complacency during workplace fire
evacuations?
I gotta be honest folks,
I don't know what the hell we're doing
here.
Is this qualitative and not
quantitative research now?
Is it mixed methods?
I did a quick search for those terms in
the document.
They don't exist in terms of describing
the study.

(17:13):
Also,
now you're looking to find the precursors
to complacency,
but as indicated by the first research
question,
you may not even have a definition for
complacency.
Are we still constructing the
definition with this question?
Okay finally,
the fourth research question.
What are the strategies to prevent
occupant complacency during workplace

(17:34):
fire evacuations?
I'm not even gonna, yeah, moving on.
So it looks like the authors created a
survey based on these research
questions along with a review,
statistical analysis,
and evaluation of data that originally
came from the Fire Risk Management
Group report from the Institution of
Occupational Safety and Health in 2023.

(17:57):
I'm not going to go through the entire
introduction here.
The rest of this is basically the
literature review that allows the
authors to set up the problem.
Some of this is about the fact that
fires kill people.
I mean,
no one's going to deny that here.
But it is important to restate these
occurrences to show that we are trying
to fix a real problem.
It also turns out that there are quite
a few previous articles that seek to

(18:19):
define complacency and relate it to
other factors,
such as risk perceptions.
Risk perceptions I can get behind.
These are quantifiable.
If you don't believe me,
go read my doctoral research,
which you can find at drmattlaw.com.
There is one paragraph here that goes
through a bunch of different
definitions,
including Merriam-Webster's 1999
collegiate dictionary,

(18:40):
which states complacency is, quote,
self-satisfaction,
especially when accompanied by
unawareness of actual dangers or
deficiencies.
Another article they found claimed that
complacency should be framed as risk
perception and risk tolerance,
given that the underlying psychological
mechanisms of these concepts are better
understood,

(19:00):
and that the link between perceived
risks and accidents is established.
Great!
Why didn't we just go with that?
Nope.
We want to attack complacency still.
Anyway,
moving on to materials and methods.
The authors created an eight-question
survey based on work conducted by
Lipinski in 2021.

(19:21):
Lipinski, by the way,
suggested that complacency occurs when,
quote,
the employee is not taking risks seriously
and takes for granted that an accident
won't happen to them.
The survey was distributed to a
convenience sample of occupational
safety and health and fire safety
professionals through the web,
LinkedIn, and through international
OSH- and fire safety-related institutions.

(19:42):
Okay, stop.
It all just fell apart for me.
This is probably the absolute worst
thing you could possibly do with this
study.
Safety professionals?
We're not even observing actual
behaviors or testing controls and
interventions.
We're asking safety professionals about
complacency.
You want to talk about response bias?

(20:03):
God bless safety professionals.
I'm one of them,
but there's a reason I just had a
tangent about complacency.
It's because even safety professionals
don't get this right.
I don't know how many videos of unsafe
situations I've seen on LinkedIn,
scroll down to the comments section,
and sure enough,
there's a bunch of so-called safety
professionals talking about how stupid

(20:25):
the guy was that just got hurt.
These are the people you want to ask
about complacency.
The safety profession as a whole hasn't
even come to a consensus on the right
way to do safety.
We've got Safety 1 and Safety 2 and BBS
and HOP and SIFS and arguments about
the statistical validity of TRIR and

(20:45):
debates over technical standards and
all sorts of other bullshit that we
can't agree on.
If we did,
if we actually figured out the right
way to do safety,
we might actually start moving the
needle on serious injuries and
fatalities.
Look, I get it.
The safety profession is relatively
young.
We've got maybe a century of practice
under our belts.

(21:05):
We've got only half of that with any
kind of regulatory standard in the US.
Compared to medical, legal,
and engineering professions that have
been around for thousands of years,
it makes sense that we haven't quite
cracked the code yet to start making
real advances.
But now,
just by handing this survey about
complacency over to a bunch of safety

(21:27):
professionals,
not only are they going to tell you
about their views of complacency,
but you're pointing them in the wrong
direction from the start.
Of course,
they're going to tell you that their
workers are complacent.
Of course,
they're going to give you exactly what
they want to hear with your results.
They're the ones dealing with this
stuff every single day.
They see the worst of it.

(21:48):
They're jaded about it because they
haven't figured out how to fix this
problem that can't actually be fixed.
They're trying to fix the wrong things,
and you're pushing them further towards
that.
Folks, I think I need a co-host.
I need someone to balance me out when
my blood starts boiling like this.
I'm about to turn into a Rush Limbaugh
or something.

(22:08):
This is ridiculous.
I don't even want to finish reading
through this study at this point,
but I'm going to.
Just let me calm down for a second.
All right.
I finally got to the section where they
list their variables and analyses.
The first thing they want to do is
identify the antecedents to occupant
complacency to try to figure out a
definition that works.

(22:30):
The respondents were given some
options,
but were also allowed to provide their own
answers in a free text box.
The questions that actually required statistical
analysis examined relationships between
variables.
Independent variables included
industry, country,
and whether their role had fire safety
responsibilities.
And dependent variables included their

(22:50):
success in strategies to avoid occupant
complacency and whether complacency
plays a role in delaying evacuation.
So what did they find?
This is actually the section that gets
a little complicated to sort out if
you're not careful.
There's the eight question survey that
went out to a convenience sample of
safety professionals and then there's
the FRMG report data set and then

(23:11):
there's the literature review.
So as far as the definition of occupant
complacency goes,
the authors solved that one themselves
using the literature review.
The definition is, quote,
a delay in pre-movement time between
the time an evacuation alarm is
activated and when movement is started,

(23:31):
end quote.
And that's what they included in the
survey to guide the question about the
antecedents or precursors to
complacency.
I'm actually a little confused by that.
A delay in pre-movement time?
Is that really complacency?
To me,
that seems more like hesitation or
non-compliance and doesn't really capture

(23:52):
those elements of self-satisfaction or
ignoring the hazards or anything about
risk perceptions or risk tolerance.
Whatever,
I guess we're rolling with it.
I mean, maybe I got mad for no reason.
Maybe we're just tagging this stupid
complacency word onto something that
really isn't what is traditionally
known as complacency and we're really

(24:12):
getting into something else entirely.
The problem is,
even with this definition,
we're still in human error as the main
focus.
And humans do human things, always.
Good luck trying to fix it.
I actually wanna run through the
question about antecedents to this
complacency that they've supposedly
defined now.
The respondents were given four options
to choose from or they could pick other

(24:34):
and type in their own response.
So using the definition,
a delay in pre-movement time between
the time an evacuation alarm is
activated and when movement is started,
what are the driving conditions?
Option one,
occupants ensure that others evacuate
at the same time rather than follow
procedures.
45 respondents chose that answer.

(24:56):
Option two,
occupants feel the building is safe and
do not feel the need to move.
35 respondents picked that one.
Option three,
occupants do not feel the risk is high
enough to warrant evacuation.
54 respondents picked that answer.
Option four, occupants are apathetic.
Gross word.

(25:17):
35 respondents chose that one.
Now they grouped the other responses
when respondents provided their own
answers into four categories.
Those were one,
ignore alarms/cues to act.
That's seven respondents.
Two, inadequate training or awareness.
two respondents, three,
lack of knowledge regarding procedures,

(25:37):
four respondents there, and four,
false alarm or drill, five respondents.
The authors said they also got a few
positive responses,
like no issues here,
people tend to evacuate immediately,
our folks follow procedures, etc.
Respondents were asked on a scale of
one to ten how often this complacency
thing plays a role in evacuations.

(25:58):
The mean score was 4.42.
By industry,
education/public services and
fire services responders had the lowest
score, which I guess is the best score,
of 3.95,
and construction had the highest score
of 4.90,
but there was no significant relationship
between industry and the effect of

(26:20):
complacency on evacuation.
Then they were asked on a scale of one
to ten how successful they are in
implementing strategies to avoid
evacuation complacency.
The mean score here was 6.36,
with the highest score,
which is now the best score,
of 7.80 for oil, gas, chemicals,
and mining,

(26:41):
and the lowest score of 5.76 for
construction.
Finally,
there was a free text answer to list up
to three specific strategies to avoid
complacency.
Want to guess what the top one was?
I'll give you a minute.
What's the top answer to fix any safety
incident ever in the history of
mankind?
You might find yourself aligning with

(27:02):
43.34% of responses here with the top
strategy being, wait for it, training,
education, awareness raising,
and communications.
Wow.
Imagine that.
You want to fix human error?
Train them.
They messed up?
Train them again.
Keep on training till the cows come
home.
Alright, what else do we have here?
Evacuation drills at 20.96%.

(27:25):
Evacuation plans, procedures,
and maps at 11.61%,
fire wardens at 7.65%,
and management role and policy tied
with the enforcement and disciplinary
procedures at 4.53%.
If you're following along,
this is all in Table 5 of the study
article.
To no surprise,

(27:46):
certainly not surprising to me,
very few of these respondents
prioritize testing the fire alarm
system, limiting nuisance alarms,
or housekeeping.
You know,
all of those things that are systemic,
controllable,
and have greater impacts to controlling
human error?
Nah.
Not a priority.
Pfft.
So then this study gets into the FRMG
data.
And I think this is where I've been

(28:07):
confused the whole time.
Again,
this study has several different elements
to it to try to answer all of their
research questions and address their
hypotheses.
Let me try to straighten this out now
while we have a minute.
We had a literature review so the
authors could define complacency.
That literature review also included
the FRMG report,
which again stands for the Fire Risk

(28:28):
Management Group of the Institution
of occupational safety and health.
Then we had this survey created by the study authors that went out to a convenience sample of safety and fire risk management professionals to gather information on the antecedents or precursors to complacency.
Then finally they dug into the FRMG
data to try to find additional

(28:49):
significant relationships there and
this data supposedly includes responses
from all types of folks,
not just safety and fire risk
management professionals.
I get the motivation for doing it this
way but the problem is that you can't
make this any clearer than a pile of
mud.
It's a mess.
You can't mix different data sets for
analysis.

(29:09):
They didn't do that but they're
bringing in all of these different
sources of information to try to answer
questions within one study.
It's just messy and I don't like it.
I'm sorry.
So they did a whole bunch of tests here
but I'm going to focus on the
significant findings.
First,
they found a statistically significant
relationship between the number of
hours of emergency evacuation training

(29:31):
over a three-year period and the
perception of concern and commitment of
the company manager about fire drills.
So more training equals "I feel that my
manager is committed to fire safety."
Okay.
Second,
a statistically significant relationship
between industry and the amount of
evacuation training.
Energy, engineering,
and waste management had the least

(29:52):
amount of training and chemicals, oil,
and gas had the most amount of
training.
Third,
a statistically significant relationship
between how respondents rated fire
safety on their worksite and how many
hours they spent in evacuation
training.
More hours equals "I think my worksite
is safe."
Finally,
older respondents are more likely to

(30:13):
evacuate on the first cue based on self
-reported outcomes.
Now,
I don't normally get into the specifics
of statistics here,
but something stuck out to me.
I don't have the data set in front of
me, obviously,
but the authors state that they used
ANOVA tests.
ANOVA stands for Analysis of Variance,
and there are different types of ANOVAs,

(30:33):
which are usually one of the simpler
but solid ways to test for associations when
you have quantitative variables.
However,
a lot of these tests have categorical
variables, like industry or country,
where there's no particular order to
them.
For these types of variables,
especially if they are the independent
variables,
you should be using a regression test,

(30:55):
like multiple linear regression or
multiple logistic regression,
and you'll probably see an ANCOVA in
there, analysis of covariance,
not an ANOVA.
The statisticians can argue with me on
this if needed.
I like to think I know my correlation
tests, but I have been wrong before.
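For what it's worth, here's a minimal sketch of what putting an unordered categorical variable like industry into a regression model involves: dummy-coding it into indicator columns against a reference level. The industry labels and the tiny helper are hypothetical illustrations, not the study's actual data or method.

```python
# Hypothetical industry labels for five respondents (not the study's data).
industries = ["construction", "oil_gas", "education", "construction", "oil_gas"]

def dummy_code(values):
    """One-hot encode a categorical variable, dropping the first level
    as the reference category (standard practice in regression)."""
    levels = sorted(set(values))
    reference, coded_levels = levels[0], levels[1:]
    rows = [[1 if v == level else 0 for level in coded_levels] for v in values]
    return reference, coded_levels, rows

ref, cols, design = dummy_code(industries)
print("reference level:", ref)       # every coefficient is relative to this
print("dummy columns:", cols)
for row in design:
    print(row)
```

Each row of the design matrix is all zeros for the reference industry and has a single 1 marking any other industry, which is exactly how a regression model handles a predictor that has no natural ordering.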
Then we get to this validation section.
This is the section that actually made

(31:17):
me simultaneously facepalm and laugh
out loud.
I'm sorry, it did,
but I have to explain why.
Do you remember the definitions of
reliability and validity that I
provided in the first episode?
If not,
let me explain those very quickly.
Validity is the degree to which a study
accurately measures what it sets out to

(31:37):
measure.
Reliability on the other hand is the
extent to which a study consistently
yields the same results when repeated.
You're going to hear me use the word
instrument here.
When I say instrument,
I'm talking about the thing that is
used to measure.
In this case,
we're talking about the survey.
Every instrument used in research
should be tested for both validity and

(31:58):
reliability.
That's how we know we are getting the
results we set out to measure,
and those results are less likely to be
influenced by other factors such as
various forms of biases and confounding
factors.
Think of it like tuning a guitar,
or timing an engine,
or setting your watch.
If you don't do these things,
your outcomes are off.
Your guitar plays the wrong notes,

(32:18):
your engine misfires,
and you're 15 minutes late for the
important meeting.
Making sure a research instrument is
valid and reliable is crucial to good
research.
So,
especially when you use a quantitative
instrument,
something that gives you a score or a
number as a measurement,
there are quantitative tests you can
use for both validity and reliability.

(32:40):
Let me share a few of these and stick
with me because this is important.
First, there is convergent validity.
Convergent validity tells you how
closely two tests that measure the same
thing are related.
Let's say one test is a survey where
someone tells you how sad they are and
the other test is I observe how sad
they are by watching them.

(33:01):
I do that multiple times with multiple
people and if I have a statistically
significant correlation between the
scores of the two different tests,
then I have strong convergent validity.
You can even do this within the same
instrument if you have two data points
that measure the same thing.
Have you ever taken one of those
personality tests where you seemingly
get asked the same question multiple
times?

(33:23):
That can give us convergent validity
within the same instrument.
Okay, second,
there is discriminant validity.
You can think of this as the opposite of
convergent validity.
Discriminant validity tells you whether
two tests that should not be highly
related to each other are indeed not
related to each other when measured.

(33:43):
For example,
if I gather two different data points,
let's say on a survey I ask for
favorite color as one data point and
for the other I ask if they ate
breakfast this morning,
there should not be a strong
correlation between those two things.
If my test says there is not,
then I have good discriminant validity.
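Both convergent and discriminant validity usually come down to a correlation coefficient. Here is a minimal sketch in Python with made-up numbers (nothing here comes from the study): a high Pearson r between two measures of the same construct supports convergent validity, and a near-zero r between unrelated measures supports discriminant validity.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Convergent: self-reported sadness vs. observer-rated sadness
# should track each other closely (r near 1).
r_convergent = pearson_r([1, 2, 3, 4, 5], [2, 4, 6, 8, 10])

# Discriminant: favorite-color code vs. ate-breakfast flag
# should show little to no relationship (r near 0).
r_discriminant = pearson_r([1, 2, 3, 4], [3, 1, 4, 1])

print(r_convergent, r_discriminant)
```

In practice you would also test whether each r is statistically significant, but the basic logic is just this: same-construct measures correlate strongly, unrelated measures do not.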
Lastly,
for validity you have criterion validity,

(34:06):
also called criterion-related validity.
This is where you determine whether
your test is measuring what it's
supposed to measure based on some kind
of gold standard.
Let's say I'm trying to determine if
college entrance exams predict future
academic performance.
I can use GPA as the gold standard for
measuring academic performance.

(34:27):
If there's a correlation,
then I have criterion validity.
Another example is if I have a widely
accepted survey that measures eating
habits and I shorten that survey by 50%
and it still gives me the same result,
I have criterion validity because the
original survey was the gold standard.
All of these are quantifiable.
You use statistical tests to measure

(34:48):
them.
If you develop a new survey instrument,
it is best to pilot it and test it for
validity and reliability,
which is also quantifiable,
before you use it widely.
In fact,
many institutions won't let you do
research for them unless you use
instruments that have been validated.
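Reliability has its own quantitative checks. A common one for survey instruments is Cronbach's alpha, which measures internal consistency across items; here is a minimal pure-Python sketch with fabricated scores, again only an illustration and not anything from the study.

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha for internal-consistency reliability.

    item_scores: list of items, each a list of respondents' scores.
    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance of totals)
    """
    def variance(xs):
        # Sample variance (n - 1 in the denominator).
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    k = len(item_scores)
    n = len(item_scores[0])
    item_var_sum = sum(variance(item) for item in item_scores)
    # Each respondent's total score across all items.
    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Three survey items answered by three respondents; the items move
# together perfectly, so alpha comes out at its maximum of 1.
alpha = cronbach_alpha([[1, 2, 3], [1, 2, 3], [1, 2, 3]])
print(alpha)  # values above roughly 0.7 are conventionally "acceptable"
```

Piloting a new survey and reporting a number like this is exactly the kind of quantitative evidence the episode is asking for.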
So, back to the study, ready for this?
Quote,

(35:08):
a qualitative approach to minimize the
effects of construct,
content and criterion validity was
undertaken for both the survey and
FRMG, end quote.
The authors claim to have designed out
construct validity through, quote,
consultation.
They used fire safety professionals to
critique and influence questions.

(35:29):
They discussed the findings with fire
safety professionals to confirm the
findings were consistent with
expectation.
Are you freaking kidding me?
Let me give you a comparison to what
just happened.
I'm a mechanic and I'm putting wheels
back on a car.
The lug nuts on a wheel have to be
tightened to a specific measurement of
torque so that you don't break the
bolts,

(35:49):
but also that your wheels don't go
flying off on the interstate.
I have seen that happen before.
Now,
I don't have a torque wrench on hand to
measure this,
but I see Jimmy over there in the
corner and Jimmy's been fixing cars for
30 years.
So I say, Jimmy,
do these lug nuts seem tight enough to
you?
Jimmy comes over,
turns the wrench a bit, says,

(36:10):
yeah buddy, I reckon that's good.
That is the validation process used for
this survey.
Look,
at least they were honest about it and
you know in some circles that might
fly,
but I swear if any of you out there
ever find yourself under my supervision
or consultation on a quantitative
study,
you better never try that stuff on me.

(36:32):
I want to be done with this.
I've kept you all subject to my tirade
for too long.
The authors basically conclude that in
order to combat this complacency that
they have so courteously defined for
us,
you probably need a strong fire safety
culture,
which includes engaged leaders and
workers,
you need procedures put into place,

(36:53):
and you need periodic training.
For limitations,
the authors discuss that future
research should reach a broader
audience,
which includes more industries,
geographic regions, language groups,
and people other than safety and fire
risk management professionals.
They also have a paragraph in the
limitations section asking for
authorities to develop legislation

(37:13):
around evacuations,
which seems like the wrong place to put
this, but whatever.
At this point,
you either get what I'm saying and you
might agree with me,
or you hate the way I've evaluated this
and you totally disagree with me.
Either way is fine.
Personally, I did not like this study,
and I think it sets a poor precedent.
At face value, the intent is admirable.

(37:35):
We want to keep people from getting
seriously hurt or killed if or when a
fire were to occur.
I won't dispute that.
I do think this study aims to fix the
wrong problem,
and based on what I just read,
I don't think it even executed that
very well.
If we want to look at behaviors,
I think we need to observe actual

(37:57):
behaviors,
not ask safety professionals to report
on behaviors.
If we do observe behaviors,
we need to not blame humans for doing
human things.
We need to put the organizational,
situational, environmental,
and systemic controls into place that
effectively prevent human error.
And whether you agree or disagree with

(38:18):
me, I would love to hear from you.
Send me a note at contact@proveitpod.com,
leave a review on your podcasting app,
or find me on social media.
Tell me I'm a buffoon.
Expect me to provide more data to argue
the case that I'm not,
but feel free to try your case.
Until next time, I'm Dr.
Matt Law.

(38:38):
This has been another episode of Study
Finds on the Prove It To Me podcast.
Take care and stay safe, everyone.
Prove It To Me is produced by me,

(39:03):
Matt Law, original music by Wes London.
You can find this podcast on Podbean,
Apple Podcasts, Spotify, YouTube,
Amazon Music, and iHeartRadio.
Like what you've heard so far?
Please like, subscribe,
and follow wherever you get your
podcasts,
and leave a five-star review on Apple
Podcasts.
Got questions about what we talked
about or research that you want to
share?

(39:24):
Send an email to contact@proveitpod.com.
The views and opinions expressed in
this podcast are those of the host and
its guests and do not necessarily
represent the official position,
opinion,
or strategies of their employers or
companies.
Examples of research and data analysis
discussed within this podcast are only
examples.
They should not be utilized in the real

(39:45):
world as the only solution available as
they are based on very limited,
often single-use case,
and sometimes dated information.
Assumptions made within this discussion
about research and data analyses are
not necessarily representative of the
position of the host, the guests,
or their employers or companies.
No part of this podcast may be
reproduced,
stored in a retrieval system,
or transmitted in any form or by any

(40:06):
means, mechanical, electronic,
recording,
or otherwise without prior written
permission of the creator of the
podcast.
The presentation of the content by the
guests does not necessarily constitute
an active endorsement of the content by
the host.