Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Thanks for your patience.
(00:01):
Your listenership is very important to
us.
You have been waiting for five hundred
four hours.
You are next in line.
The host of Prove It to Me will be
with you in approximately one minute.
Please stay on the line.
(00:47):
Hello everyone and welcome to Prove It
to Me.
I'm your host Dr.
Matt Law.
Yes, I know I'm late again.
If you've listened to this podcast
before,
you know I have a disclaimer
explaining that this podcast is
completely separate from my employment
with any of the entities for whom I
work.
As such,
(01:07):
sometimes my regular employment
requires me to be cut off from the
ability to perform any other work for
anyone else for a certain period of
time.
Now,
that's a nice way of saying I went into
the field and did marine shit as a
naval officer for about a week.
Anyway,
when I do get cut off from everything
else,
I get a little backlogged on the other
things I need slash want to accomplish.
(01:30):
So here we are.
I truly do appreciate your patience.
Now on the last episode,
we looked at scare tactics used in
workplace safety and health training.
We discussed some of the things I
normally bring up in the ethics course
I co-teach and we looked at four
different studies that critically
evaluated the use of scare tactics to
influence behaviors in a variety of
(01:52):
applications.
One of those studies was this weird
critical multimodal discourse analysis
dealing with semiotics,
which I still haven't quite figured out
how to properly evaluate.
The other three however,
quantitatively showed that the scare
tactics used in their study were not
working as intended.
(02:13):
We learned a few key points in part
one,
but one of the most important things
was this extended parallel process
model,
which illustrates the relationship
between scare tactics,
which are also known as fear appeals or
threat appeals, and behavior outcomes.
Importantly though,
this model illustrates that efficacy
makes or breaks that relationship.
(02:35):
Based on this model,
you have to thoroughly scare the shit
out of the person receiving the message
so they feel the threat is severe
enough and they are susceptible enough
to the threat.
Then,
you have to also make sure that efficacy
exists for the person receiving the
message.
Remember,
efficacy requires response efficacy
(02:56):
which means the person receiving the
message believes the recommended
behavior will reduce the threat.
And it also requires self-efficacy,
which means the person believes they
are capable of doing the recommended
behavior.
For example,
let's pick on driver safety because
that's an easy one to pick on.
Let's say I trained a driver and used
(03:16):
all of these gruesome photos and videos
to show them the severity of accidents
when they occur.
Honestly,
I didn't even have to dig real hard to
find those, right?
Then I trained the driver how to drive
safely.
You know, don't speed, stay aware,
don't run red lights,
don't text and drive,
do hands-free calling,
(03:36):
all of that stuff.
Let's assume the driver now feels that
the outcome of an accident is pretty
severe.
They know they are susceptible to an
accident and they know what behaviors
they need to do to avoid it.
Now, cut to six weeks later,
that driver is in Florida where on a 70
mile per hour interstate,
(03:57):
traffic goes at two speeds,
60 miles per hour in the left lane,
and yes, I said the left lane,
or 80 to 90 miles per hour everywhere
else.
The driver no longer feels that 70
miles per hour is a feasible speed,
so they decide to join the 80 to 90
group.
Self-efficacy for not speeding, gone.
(04:18):
Then a text comes in, it's urgent.
The driver feels like they need to
respond.
The driver believes they can send the
text real quick without any negative
impact.
Why?
They've been doing it every day because
the job demands it and so far,
nothing bad has happened.
Also,
no one trained them on how to use hands
-free texting.
(04:40):
Response efficacy, gone.
And without efficacy,
those scare tactics are no longer
affecting the desired behavior.
So, theoretically,
it takes all of these things for scare
tactics to work.
If you ask me,
that's a lot of shit to manage
while you insist on using your graphic
images and videos in your safety
training.
(05:00):
Oh, and also,
this model explains that fear without
efficacy can actually exacerbate the
problem.
People will adopt fear control
processes resulting in defensive
avoidance.
The outcome you want is danger control
processes,
meaning you want them to be focused on
controlling the hazard rather than
controlling their own fear.
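That branching logic can be sketched in a few lines of code. This is a minimal illustration under my own assumptions (0-to-1 perception scales, a single cutoff, and the function name are all mine, not anything from the model's authors):

```python
# Illustrative sketch of the extended parallel process model's decision logic.
# Scales, threshold, and naming are assumptions for demonstration only.

def eppm_outcome(severity: float, susceptibility: float,
                 response_efficacy: float, self_efficacy: float,
                 threshold: float = 0.5) -> str:
    """Classify the predicted response to a fear-appeal message.

    Each input is a perceived level on a 0-to-1 scale.
    """
    perceived_threat = min(severity, susceptibility)
    perceived_efficacy = min(response_efficacy, self_efficacy)

    if perceived_threat < threshold:
        # Threat too weak: the message gets ignored entirely.
        return "no response"
    if perceived_efficacy >= threshold:
        # High threat plus high efficacy: the person works to
        # control the hazard (the outcome we want).
        return "danger control"
    # High threat but low efficacy: the person controls the fear
    # instead, which shows up as defensive avoidance.
    return "fear control"

# Scared driver who doesn't believe the recommended behavior is feasible:
print(eppm_outcome(0.9, 0.8, 0.9, 0.2))
```

Note that both efficacy inputs have to clear the bar; tanking either response efficacy or self-efficacy alone is enough to flip the outcome to fear control.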
(05:22):
Now remember,
I say theoretically because this is a
model,
and models don't tell us the way things
work.
Models tell us which relationships to
measure to determine if things are
working.
However,
on the last episode we went through
three different studies that each
mentioned this model,
and I'm sure there are certainly more
studies out there that measure against
(05:43):
this model.
By the way,
you realize these researchers are doing
the Lord's work, right?
How many of you out there are just
creating and delivering your safety
trainings and never really measuring
outcomes after the fact?
And I'm not talking about your little
post-training quiz that measures
knowledge retention immediately after
the training.
I'm talking about measuring the stuff
that matters and drawing correlations.
(06:06):
Look,
all I'm saying is that if you're going
to continue using scare tactics,
you better be prepared to measure the
outcomes.
If you don't measure it,
you can't manage it.
Until you do,
you can't swear to me that it works.
I just went through a bunch of research
that quantitatively states that it
probably doesn't.
And if you really wanted it to,
(06:26):
there's some extra steps that I bet you
haven't bothered to take.
Now also,
only one of those studies really talked
about the span of time that the scare
tactics they were studying influenced
behaviors.
Which was partially why I started this
whole endeavor.
Just to recap,
that was the study on the party
intervention in Germany,
where they took students to a hospital
(06:48):
and told them all the bad stuff that
happens when you have a severe injury
from a traffic accident.
They said that previous research found
short-term effects, one to two weeks,
on knowledge and attitudes,
medium-term effects,
six to twelve months on self-reported
risk behavior.
Which again,
ask your workers six to twelve months
after training whether they're doing
the right thing all the time.
(07:09):
Of course they'll say yes.
Self-reporting is what it is.
Oh, and long-term effects,
up to 44 months, on traffic offenses,
injuries, and deaths.
44 months is pretty solid,
but I feel like I'd want to dig into
that a little further.
Anyway,
what I promised for this episode was to
bring in some studies that support the
(07:30):
use of scare tactics.
I'm going to try to do that,
and we'll go through in rapid-fire
fashion like we did the last time.
This time I will try to slow down my
speech a little.
I realized I kinda sped through those
studies last time,
and that's not what I want to do when
things get super technical,
(07:51):
but also really important for everyone
to understand.
Remember this whole thing started when
I made a claim in the aforementioned
ethics course that the lifespan for
scare tactics to influence behaviors
after training was 30 to 60 days.
Now I know I came up with that number
somewhere,
so I dug deep into my previous work and
(08:13):
I found what I was looking for.
So after I go through these studies,
we're going to talk about that.
Spoiler alert,
I didn't quite remember it correctly.
I might have been spreading
misinformation for my own interests.
So,
part of this is going to be the critical
evaluation of Dr.
(08:33):
Law's potentially false claims.
Count yourself fortunate that you get
to listen to me put my foot in my own
mouth.
Get ready for a once-in-a-lifetime
experience on the back half of this
episode.
But, studies first.
Everybody ready?
Let's go.
This first study was published in 2020
by Li, Lu,
and Chen in the Health Education
(08:55):
Journal.
It is titled Emotions in Fear Appeals:
examining college students' attitudes
and behavioral intentions towards
colorectal cancer prevention in Taiwan.
It was funded by the Ministry of
Science and Technology of Taiwan.
Now,
you're going to ask me why we're looking
at another public health study instead
of workplace safety.
(09:16):
We just went over this.
If you want to know specifically how
scare tactics and workplace safety
training are affecting behaviors,
you gotta start measuring it.
If you remember, in part one,
I had only one study that looked at
workplace safety,
and that was in Turkey.
We've got room for more research here,
folks.
So this study looks at the extended
(09:37):
parallel process model, surprise,
surprise,
and starts off by pointing out
criticisms of the model.
Namely,
it does not examine the role of emotion
and how that influences behavior.
Their argument,
based on previous research,
is that the model assumes that fear is
the only emotion elicited by fear
(09:57):
-appeal messages,
while other studies have shown that
both fear and anxiety are elicited by
fear-appeal messages as two separate,
discrete emotions.
It also seeks to question the
proposition that threat and efficacy
have a moderating effect on attitudes
and behavior.
Their argument here is that the model
claims that only high threat and high
(10:19):
efficacy will result in behavior
change.
The authors think that the other three
groups, high threat-low efficacy,
low threat-high efficacy,
and low threat-low efficacy,
may also cause behavior change just to
a different degree.
They want to see if there is a
significant difference.
So they sought to examine emotions and
(10:41):
the interaction effects of threat and
efficacy to understand their effects on
attitudes and behavioral intentions
towards colorectal cancer prevention.
Now, remember,
I'm going through this study from
beginning to end,
and I have to tell you once again that
I'm getting more excited as I do.
They start with a pretty thorough
literature review where they go through
(11:02):
the previous research,
and that informs their hypotheses.
That's exactly the way it should be
done.
If you want to hear me rant on
constructing hypotheses,
go back and listen to the episode on
the dreaded C word.
So,
the first hypothesis is split into two
parts.
First,
fear is elicited by subjects' perceived
severity of colorectal cancer.
(11:25):
Second,
anxiety is elicited by subjects' perceived
susceptibility to colorectal cancer.
Honestly,
that's two different independent
variables and two different dependent
variables.
If I were doing this,
I would have actually made that into
four separate hypotheses so that you're
looking at both severity and
susceptibility and their effects on
both fear and anxiety.
(11:47):
That's okay,
I'll give it to them this time.
Then,
they have the second hypothesis split
into three parts.
First,
fear is positively correlated with
change in attitudes and behavioral
intentions towards colorectal cancer
prevention.
Second,
anxiety is positively correlated with
changes in attitudes and behavioral
intentions towards colorectal cancer
(12:07):
prevention.
And third,
joy is not related to changes in
attitudes or behavioral intentions
towards colorectal cancer prevention.
So, we're also looking at joy now.
That's cool.
Lastly,
they have a well-constructed research
question for that final part.
Does the high-threat,
high-efficacy group achieve the best
(12:29):
persuasion effect while the remaining
three groups, high-threat,
low-efficacy, low-threat,
high-efficacy, and low-threat,
low-efficacy,
do not differ in regard to the
persuasion effect?
So, let's get into methodology.
To develop their messages to be tested,
they did a little qualitative research
and collected 603 news articles of
(12:50):
colorectal cancer.
From that,
they picked four articles for each
element in the model, severity,
susceptibility, response efficacy,
and self-efficacy.
They used a focus group to narrow that
down to one high-threat,
high-efficacy message and one
low-threat, low-efficacy message.
Then,
they ran a pilot study with 86 responses
(13:11):
to find a significant difference in
perceptions across all four model
elements between the two messages.
Then,
they did a pre-test post-test design to
measure perceptions among a final
sample size of 402 college students.
The pre-test set the baseline for the
perceptions and behavioral intentions,
and the post-test,
(13:32):
conducted a week later,
contained either the high-threat,
high-efficacy message,
or the low-threat,
low-efficacy message.
Split about half and half among those
402 students.
And then they
measured perceptions and behavioral
intentions again.
Oh, fantastic!
They actually quantitatively tested for
validity and reliability.
(13:53):
This study is good for my soul.
Okay,
lots of great statistical tests in
here.
ANOVAs and regression models.
Love it.
Okay, let's get down to the findings.
First,
fear had a significant relationship
with severity,
but did not have a significant
relationship with susceptibility.
(14:15):
That's interesting.
Second,
anxiety had a significant relationship
with both severity and susceptibility,
but it looks like severity affected
anxiety more than susceptibility.
Furthermore,
they found that anxiety had a direct
effect on behavioral intentions,
but fear did not.
(14:35):
Oh boy.
Does that mean anxiety is actually what
we're after with these scare tactics
and not fear?
Yes,
anxiety made a positive contribution to
adoption of danger control processes,
the behavior we want,
where they are actually intending to
control the hazard.
Now, that's behavioral intention.
(14:57):
Let's talk about attitudes.
This study found that none of the
emotions—fear, anxiety,
or joy—had any significant effect on
attitudes.
Instead,
it's only perceived threat and perceived
efficacy that affect attitudes.
So what they're saying here is that the
message itself is the only thing that
affects attitudes,
(15:18):
whereas either the message or the
emotions caused by the message can
affect behavior.
Speaking of that third emotion, joy.
Joy also had a positive effect on
behavioral intentions.
Now,
the authors go through this whole discussion
on systematic and heuristic processing
modes for these different emotions,
which I'm not going to fully get into
(15:38):
at the moment.
But they basically conclude that
although both joy and anxiety affect
behavioral intention,
anxiety is more likely to have a longer
lasting effect.
They didn't actually test that.
That comes from previous research.
But they suggest that future research
on fear appeals should look deeper into
this.
(15:59):
That was what I was looking for the
whole time.
Thanks.
Finally,
they found that the high threat,
high efficacy message had the most
persuasion on attitudes and behavioral
intentions, followed by high threat,
low efficacy, low threat,
high efficacy, and low threat,
low efficacy had the least persuasion.
(16:19):
Perceived threat had more effect on
attitudes than perceived efficacy,
and perceived efficacy had more effect
on behavioral intentions than perceived
threat.
The authors conclude that the fear
appeal approach was effective in
changing attitudes and behavioral
intentions towards colorectal cancer
prevention among college students.
(16:39):
However,
they say that high threat and high
efficacy messages work best.
And these messages should elicit an
adequate amount of anxiety to be fully
affected.
So, there it is.
One point for scare tactics.
But again,
that point comes with caveats.
You can't just go about scaring people
(17:00):
willy-nilly.
You have to make sure efficacy is part
of it.
And you should focus on inducing
anxiety more than fear.
Moving on to the next study.
This one authored by Lang, Ru,
and Zhu was published in Safety Science
in 2022.
It's titled Risk or Efficacy,
(17:21):
how risk perception and efficacy
beliefs predicted the use of hearing
protection devices among different
groups of Chinese workers.
This study was funded by the National
Social Science Foundation in China with
no declared competing interests.
So,
this study actually challenges the extended
parallel process model by instead
looking at the
(17:42):
construal level theory of psychological
distance, which suggests that both
severity and efficacy may not need to
be highlighted simultaneously.
So before we get into the study itself
we need to look at this other
theoretical model.
So this construal level theory of
psychological distance or CLT for short
(18:03):
comes from Trope and Liberman who have
been working on this for decades
apparently.
I'll include their 2010 article in the
study notes, it's open access.
This model illustrates how
psychological distance plays a role in
decision making.
Basically there are four interrelated
dimensions of psychological distance:
spatial, social, temporal,
(18:25):
and hypothetical.
Spatial distance refers to how far the
target object is physically removed
from the person.
Social distance looks at the level of
relational closeness and brings in
things like a person's view of their
behavior,
the causal role of other people, etc.
Sort of this self-other thing.
(18:47):
Temporal distance looks at how distant
an event happened or will happen from
the present time and hypothetical
distance deals with the level of
certainty of an event.
Now it's important to note that these
four dimensions are not entirely
separate like fear and efficacy are in
the other model.
Instead they are often correlational.
(19:08):
If you have spatial distance you
probably also have social distance.
Bringing this theory into account,
the authors wanted to bring in factors
such as age and seniority and whether
they might have a moderating effect on
perceived severity of noise damage,
response efficacy,
and self-efficacy on hearing protection
device usage.
(19:28):
They had six hypotheses.
One: age moderates the effect of
perceived severity of noise on hearing
protection device usage.
Two: seniority moderates the effect of
perceived severity of noise.
Three: age moderates the effect of
response efficacy on hearing protection
device usage.
(19:48):
Four: seniority moderates the effect of
response efficacy.
Five: age moderates the effect of
self-efficacy.
Six: seniority moderates the effect of
self-efficacy.
Again, repetitive,
but this is how good research is done.
Also,
notice that when you break it down like
this, the research becomes very simple.
(20:11):
The things we're trying to identify are
not that complicated.
And they are guided well by the
theories we've been discussing.
So they did a survey for this,
and they got 449 workers from five
different factories to respond.
They also had 46 of those workers stay
after the survey for follow-up
interviews.
That part was to assess their opinions
(20:32):
of the hearing protection devices made
available to them at the factories,
relevant policies,
and attitudes of management towards
hearing protection.
So,
I normally don't get into the nitty gritty
of statistics,
but I want to explain a little bit of
what happened here and why I love
regression models so much.
They used linear regression to analyze
(20:54):
relationships between variables.
The reason why I love linear regression
is because not only are you able to see
the strength of a correlation,
you are also able to calculate
variance.
What does that mean?
At any given time,
there are many things that potentially
cause something to happen.
For example,
(21:14):
let's say I want to determine what
affects people's decision to eat eggs
for breakfast.
That decision to eat eggs for breakfast
could be affected by age, gender,
normal daily diet, the price of eggs,
the existence of chronic diseases,
the expectations of others to use eggs,
the tools available to cook eggs,
the amount of sleep the person got the
(21:36):
night before,
the availability of cereal in the
pantry,
etc.
So with linear regression,
let's say I find a significant
correlation between the amount of sleep
and the decision to cook eggs.
I'll also get this other number that
tells me variance,
and I'll be able to say that the amount
of sleep accounts for 11% of the
(21:58):
variance.
That means that sleep amounts to 11% of
my decision to cook eggs for breakfast,
and the other 89% is affected by all of
those other potential factors.
Do you see where I'm going?
This is what gets us a little closer to
causation versus correlation.
(22:18):
I can't say that the amount of sleep is
the exact cause of my decision to cook
eggs,
but I can say that it is potentially 11
% of the cause.
Now I have 89% of other things to
explore in my research to try to paint
the whole picture of causation.
Oh, if anybody asks, or if you care,
(22:40):
variance is your R-squared number.
Oh,
and you also get this B coefficient which
tells you the expected or average step
change.
Now,
my decision to cook eggs is a simple no
or yes,
represented by the numbers 1 and 2 in
my statistical analysis.
My B coefficient for how sleep affects
my decision might be .358.
(23:04):
That means sleep gets me .358 closer
from a no to a yes decision to make
eggs.
This is why I'm obsessed with
regression models.
They just give a better understanding
of the whole picture of the
relationship between variables.
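To make that tangent concrete, here's a from-scratch sketch of one-predictor linear regression on the eggs example. The data and the function are purely illustrative assumptions of mine, not anything from the study:

```python
# One-predictor ordinary least squares from scratch, using made-up numbers:
# hours of sleep vs. a 1 = "no" / 2 = "yes" decision to cook eggs.

def linreg(x, y):
    """Return (slope, intercept, r_squared) for simple OLS."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx                      # the "B coefficient"
    intercept = my - slope * mx
    # R^2: the share of the variance in y that x accounts for.
    ss_res = sum((yi - (slope * xi + intercept)) ** 2
                 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1 - ss_res / ss_tot

sleep = [4, 5, 6, 7, 8, 9]   # hours of sleep (made-up data)
eggs  = [1, 1, 1, 2, 2, 2]   # 1 = skipped eggs, 2 = cooked eggs
b, a, r2 = linreg(sleep, eggs)
print(f"B = {b:.3f}, R^2 = {r2:.2f}")
```

With these made-up numbers, B comes out around 0.26 (each extra hour of sleep moves the decision about a quarter step from no toward yes) and R-squared around 0.77, so in this toy data sleep would account for roughly 77% of the variance, with the rest left to all those other factors.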
Okay, back to the study.
I'm not going to go through every
regression calculation, I promise.
(23:24):
I'm sure you might be driving and I
don't want to put you to sleep.
I just went on that whole tangent about
regression models because I want you to
understand why I prefer them and why it
makes me excited about the findings
here.
First,
you may be interested to know that
perceived severity, age,
their interaction effect along with all
control variables including response
(23:45):
efficacy, self-efficacy,
and seniority explained 15% of the
total variances in hearing protection
device usage.
Just think about that for a second.
That tells me that all of these things
together have an effect,
but there are still other factors that
play 85% of the role in whether someone
(24:06):
uses hearing protection or not.
Hmm.
The authors found that age moderated
the effect of perceived severity of
noise on hearing protection device
usage.
Basically,
younger workers were more affected by
perceived severity of noise.
They think this is because older
workers are the ones who are more
(24:26):
likely to be affected by actual
problems from noise damage,
so they are less affected by the
perception and more affected by the
reality.
Seniority, however,
did not moderate the effect between
perceived severity and the use of
hearing protection.
They think this relationship is
normalized because of the training
received as part of job orientation.
(24:48):
They also found that seniority
moderated the effect of response
efficacy and self-efficacy on hearing
protection device usage.
More senior workers' decision to use
hearing protection was more dependent
on efficacy.
Basically,
more senior workers must have the faith
in the effectiveness of hearing
protection and exhibit confidence in
(25:09):
using it in order for them to actually
use it.
What are the implications?
Messaging targeted at younger workers
should highlight severity of noise,
and messaging targeted at senior
workers should focus on perceived
effectiveness of hearing protection and
their confidence in using the devices.
Additionally,
self-efficacy exhibited the strongest
(25:31):
effect on hearing protection device
usage across all six of their
regression models compared to other
predictors.
This is supported by their findings
from the follow-up interviews,
which highlighted dissatisfaction with
the hearing protection devices their
employers offered.
See,
it doesn't matter so much how scared
they are.
(25:52):
It matters more if you give them the
right tools for the job.
Wait, Dr.
Law,
I thought you were giving us studies
that supported scare tactics.
Well, we sort of got there.
Sorry, this is my podcast.
So it's going to be a little self
-serving.
My bad.
Okay,
now for the moment you've all been
(26:12):
waiting for.
The desecration of Dr.
Law's credibility by digging into his
potentially false claims.
My claim was that the effects of scare
tactics are short-lived,
lasting only 30 to 60 days.
So where did this number come from?
To get this answer,
I had to dig back into my previous
work,
all the way back to five or six years
(26:34):
ago before I ever started really
getting into original research,
before I ever started my doctoral work,
and before I had a true understanding
of research to practice,
good measurement,
and how to fucking write properly.
If you've known me long enough,
you might know that I have a short
presentation series on selling safety.
Now, others talk about this,
(26:55):
but this series is my own spin based on
my own work in learning how to have
influential conversations,
how to identify the value of safety,
and how to align that value with the
motivations of organizational
stakeholders.
That work also turned into my first
peer reviewed article in the
Professional Safety Journal in 2020.
Now, I stand by this work,
(27:16):
but I have to say it was a little
painful going back and reading it,
knowing what I know now.
I've always been a good writer.
I'm convinced that my 4.0 GPA
associated with my doctoral degree is
because my writing was good,
not because the content was always
good.
I may be spewing bullshit,
but my grammar and writing style make
it believable.
(27:37):
Anyway,
my 2020 article is included in the
episode notes here because this is
where I had to start.
First,
I'm going to admit that I had a little
bit of plagiarism here in the form of
having a reference in my reference list
that I never actually cited in the
paper itself.
That's a big no-no.
If you have something in your reference
list,
(27:58):
you have to actually have used it in
the paper along with some form of
proper in-text citation.
I probably did this because I had used
that reference in my original
presentation and I just copied and
pasted the reference list from that
into my article.
That's not a justification.
I know I fucked up.
But I know better now,
(28:19):
and I won't let you use it against me
if I ever grade one of your papers.
So, in this article,
I cited some work by my friend Tim
Page-Bottorff.
If you work in safety and you don't
know that name, you need to learn it.
I have a tremendous amount of respect
for Tim and his work,
and he gives so much back to our
profession.
So Tim had published an article in 2016
(28:42):
about creating safety habits.
In my paper,
I was discussing how selling safety
should incorporate more positive
conversations and turn away from
negative interactions.
Here's the quote that I pulled from
Tim's article.
An OSH team cannot intimidate or scare
employees into changing their habits.
(29:04):
With the average person needing just
under 10 weeks to develop a new habit
and a sizable minority of employees
requiring longer,
no single graphic video or shocking
story will have a consistent impact on
their behavior for the length of time
required to establish a new routine.
Okay,
so we have just under 10 weeks to create
a habit,
(29:24):
but we don't have a number on scare
tactics yet.
But at this point...
I'm almost 100% convinced this is where
I got my number.
So let's dig into Tim's article a
little further and see where he got
this.
His article is also referenced in the
episode notes,
but you'll need access to the
professional safety journal somehow to
(29:45):
read it.
Now,
Tim's article is a best practices article.
So this is one of the articles in PSJ
that is not peer reviewed.
The reason for this is probably because
this article is shorter.
It's a snippet of a larger body of
work, but he has references.
So let's see how deep we can dig.
Quoting Tim's article,
(30:06):
a common belief is that it takes 21
days to form a new habit.
Unfortunately,
it is not quite that straightforward
and that statement is missing a few
words.
One study on changing habits found that
it takes a minimum of 21 days to form a
new habit.
Subsequent research has confirmed that
the 21 day mark is often a best case
(30:26):
scenario and that the median timeframe
for habit formation is 66 days,
end quote.
So we're going to pause here on Tim and
go to the article where he got this
information.
I promise you,
this is important for our discussion,
but I won't spend as much time here as
I normally do on individual articles.
This study by Lally, van Jaarsveld,
(30:48):
Potts and Wardle was published in the
European Journal of Social Psychology
in 2010.
It's titled, How Are Habits Formed?
Modeling habit formation in the real
world.
They took 96 volunteers and had them
choose an eating, drinking,
or activity behavior to carry out daily
for 12 weeks.
They completed the self-report habit
(31:09):
index each day and recorded whether
they carried out the behavior.
The results were able to fit 62 of
these participants on a nonlinear
regression model,
which basically has a curve instead of
a straight line,
but we don't need to dig too deep into
that.
So they found that the median time to
reach 95% of their asymptote
(31:29):
of automaticity,
which is a technical way of saying they
satisfactorily created a habit,
was 66 days.
The range, get this,
was 18 to 254 days.
The difference in this time was not
significant based on the type of habit.
So what we're taking from this is the
(31:50):
median is 66 days to form a habit,
but it could take a lot longer for
quite a few folks.
That could explain a little bit of my
30 to 60 days thing.
66 is pretty close.
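That rising-toward-an-asymptote curve can be sketched like this. The exponential form and the rate constant are my own illustrative assumptions, chosen only so the 95% mark lands near the study's 66-day median, not values from the paper:

```python
import math

# Sketch of a habit-formation curve: automaticity climbs toward an
# asymptote, and "habit formed" is the day it reaches 95% of it.
# The model shape and rate constant here are illustrative assumptions.

def automaticity(day: float, asymptote: float, k: float) -> float:
    """Modeled automaticity score after `day` daily repetitions."""
    return asymptote * (1 - math.exp(-k * day))

def days_to_95_percent(k: float) -> float:
    """Solve asymptote * (1 - e^(-k*t)) = 0.95 * asymptote for t."""
    return math.log(20) / k   # ln(20), because 1 - 0.95 = 1/20

# A made-up rate that puts the 95% mark at the study's 66-day median:
k = math.log(20) / 66
print(f"Habit formed after about {days_to_95_percent(k):.0f} days")
```

The wide 18-to-254-day range in the study would correspond to different people having very different rate constants, which is why one graphic video can't be counted on to outlast everyone's habit-formation window.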
All right,
back to Tim's article to grab one more
reference.
Quote,
negative communications from injury
-based scare tactics to old school
(32:11):
yelling might correct behavior in the
moment,
but the effect will wear off well
before a new habit is formed.
Positive thinking has been demonstrated
to increase people's ability to build
personal skills and supervisors who
take an approach based on encouragement
and support will likely promote better
habit formation." From this section,
(32:33):
we'll pull one more reference,
so let's go to that article.
This article published in 2008 by
Fredrickson, Cohn, Coffey, Pek,
and Finkel in the Journal of
Personality and Social Psychology is
titled Open Hearts Build Lives,
Positive Emotions Induced Through Loving
-Kindness Meditation Build
Consequential Personal Resources.
(32:55):
Now this, folks,
is not an article I would have clicked
on.
First of all,
it's a little bit on the old side,
it's from 2008.
My preference is to stick within the
last five years for the most recent
research unless the work is seminal,
meaning it was so influential that it
will never not be relevant.
Second, it sounds soft and qualitative,
(33:19):
and that's not super appealing to me as
a quantitative researcher.
Nevertheless,
we will dig into it trying not to judge
a book by its cover,
or an article by its title, whatever,
you get the picture.
These folks had a final sample size of
139 participants.
They did a pre-test post-test design,
which measured mindfulness and
(33:40):
awareness,
agency thinking and pathways thinking,
savoring beliefs, life orientation,
ego resilience,
psychological well-being,
dyadic adjustment,
positive relations with others,
illness, sleep, satisfaction with life,
and depression.
In between the pre-test and post-test,
(34:01):
they spent nine weeks completing a
short report on their emotions and time
spent in meditation, prayer,
or solo spiritual activity over the
past day.
Now, during that nine-week period,
half of the participants were put in
these workshops once a week that
exercised loving-kindness meditation.
This meditation helped the participants
(34:21):
practice directing love towards
themselves their loved ones,
their acquaintances, strangers,
and finally to all living beings.
They were also given recordings to
practice at home.
Ugh, you know,
this is probably something I need.
Basically,
they found that this meditation did not
greatly decrease negative emotions,
(34:43):
but it greatly increased positive
emotions.
The increase in positive emotions also
helped curb the impacts of negative
emotions like depression.
Participants were able to build the
resources that make their lives more
fulfilling and help keep their
depressive symptoms at bay.
So this study gives us a little
ammunition to say that giving people
positive interactions and giving them
(35:05):
something to believe in is actually a
tool in itself to help them be more
successful.
To be honest,
we might have to have another episode
that digs into the impact of positive
reinforcement as opposed to the impact
of scare tactics.
All right, now.
I have brought all of this to you to
say that I have been spreading a little
(35:26):
bit of misinformation.
My claims that the effects of scare
tactics on behavior last only 30 to 60
days were not entirely true.
However,
I think we still learned some things.
Let me list them out.
1.
Theoretically, scare tactics can work,
but 2.
(35:46):
You have to make sure they sufficiently
induce both perceived severity of the
hazard and perceived susceptibility to
the hazard.
3.
You have to make sure the person has
efficacy,
meaning they feel like what you're
telling them to do will actually work,
and they have the ability to do what
you're telling them to do.
Which also means that you have to give
(36:08):
them the right tools to do the job.
4.
If you do happen to induce fear but
fail to ensure efficacy,
you can exacerbate the problem.
A fear control response will occur
instead of a danger control response.
The person may avoid the task
altogether instead of controlling the
hazard to work safely.
Trust me,
(36:28):
you as the safety professional do not
want to be the reason your workers
aren't working.
5.
The effects of scare tactics may have a
time limit.
However,
we don't know exactly what that time
limit is.
The time limit is probably dependent on
how long the efficacy lasts,
and efficacy is affected by many
(36:50):
things,
so it's a little fragile and a lot to
maintain.
6.
Regarding the emotions you're trying to
induce: for scare tactics to be
effective, you may have to focus more
on inducing anxiety rather than fear.
7.
If you use scare tactics,
you may have to tailor the message
based on factors like age and
(37:11):
seniority.
8.
If you use scare tactics,
you sure as hell better make sure it's
legal and that you consider the ethics
around what you're doing.
9.
Positive messaging may be more
effective,
but we need to do a little bit more
research on that to really know.
10.
I still don't understand what the hell
(37:31):
a multimodal discourse analysis is,
and finally 11.
It is entirely possible to dig through
my previous work and poke holes in it.
Dr.
Law is not immune to critical feedback.
And that, my friends,
is all I have time for today.
I hope that you've enjoyed part 2 of 2
of my discussion on scare tactics in
workplace safety training.
(37:52):
Until next time, I'm Dr.
Matt Law.
This has been another episode of Study
Finds on the Prove It To Me podcast.
Take care and stay safe, everyone.
(38:21):
You can find this podcast on Podbean,
Apple Podcasts, Spotify, YouTube,
Amazon Music and iHeartRadio.
Like what you've heard so far?
Please like,
subscribe and follow wherever you get
your podcasts.
And leave a five-star review on Apple
Podcasts.
Got questions about what we talked
about or research that you wanna share?
Send an email to contact at proveitpod.com.
(38:44):
The views and opinions expressed in
this podcast are those of the host and
its guests and do not necessarily
represent the official position, opinion,
or strategies of their employers or
companies.
Examples of research and data analysis
discussed within this podcast are only
examples.
They should not be utilized in the real
world as the only solution available as
they are based on very limited,
often single-use-case, and sometimes
(39:06):
dated information.
Assumptions made within this discussion
about research and data analysis are
not necessarily representative of the
position of the host,
the guests or their employers or
companies.
No part of this podcast may be
reproduced,
stored in a retrieval system or
transmitted in any form or by any means
mechanical, electronic,
recording or otherwise without prior
written permission of the creator of
(39:27):
the podcast.
The presentation of the content by the
guests does not necessarily constitute
an active endorsement of the content by
the host.