Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:01):
I know it's not Halloween,
but I have a very spooky subject to
cover today at the request of some very
curious students.
And you know what they say about
curiosity.
Ha ha ha.
It's an all new Prove It To Me.
(00:47):
Hello everyone and welcome to Prove It
To Me.
I'm your host, Dr.
Matt Law.
I'm going to do something a little bit
different with this Study Finds
episode.
This isn't something I found on the
news or in an article somewhere.
This is sort of a special request.
You see,
I have a graduate level ethics class
that I am currently co-teaching.
(01:09):
It's fun, it's dynamic,
and it inevitably challenges the way
students think about certain situations
and how they manage workplace safety.
It's also challenging for me because a
lot of ethical debates live in this
realm of uncertainty.
There isn't a lot of data.
No way to quantify the impact of
certain approaches and issues within
(01:29):
ethics.
Sometimes it feels like fluff,
even though the conversations are
extremely valuable and absolutely
necessary for the work we do as safety
practitioners.
But for me,
as a quantitative researcher,
sometimes I'm uncomfortable living in
that realm of uncertainty.
Anyway,
there's this part of the course where
we discuss the macabre.
(01:51):
I do love this section because it
always sparks debate and challenges the
way students think,
but it's also something I consider to
be a huge plague to our profession.
Okay, macabre,
what the hell are you talking about,
Dr.
Law?
I'm sure most folks listening right now
have sat through some kind of safety
training in their careers.
In fact,
(02:12):
because of the nature of this podcast
and the listenership,
many of you have probably delivered
safety training to your workers.
There's this really insanely popular
way to get the point across during
safety training that involves using
graphic imagery or videos,
the kind with blood and guts and
uncomfortable situations that are
surely used to scare our workers
(02:33):
into behaving a certain way.
We couple this macabre imagery
language like,
don't let this happen to you or this is
what happens when you don't follow the
rules.
They're all gross.
And the worst are the ones that
actually show fatalities.
Someone got killed in that video.
Now,
there is a huge discussion about ethics
to be had around the use of macabre
(02:53):
imagery, and it's multifaceted.
First,
we have the topic of psychological
safety.
Will the folks on the receiving end of
graphic imagery in videos experience
any negative feelings after viewing the
imagery?
Could trauma be induced?
Could it exacerbate the underlying
issues such as post-traumatic stress
disorder or PTSD?
Now don't start with me about how your
(03:16):
workers are in construction or heavy
manufacturing,
and these are tough guys,
and if they can't handle that,
they shouldn't be doing this kind of
work.
Look,
I've been in the military for 15 years
and I've seen the toughest and the
strongest and I've seen them get taken
down by the smallest things that affect
their mental health.
Does it make them less tough?
(03:36):
No.
Trauma is real and it causes real,
serious, long-term effects,
and you can't pretend to know how
anyone is feeling or how they will be
affected by certain things.
Second,
where does this imagery come from?
Do we own it?
Was consent given by the injured,
the deceased,
or their families to use this imagery
(03:57):
for training purposes?
How do the families feel about their
dead loved ones being put on display
for all of us to see?
Was consent given by the organization
where the incident occurred?
Or did someone take a cell phone video
while watching surveillance recordings
and then post it on social media?
Yeah, that one's not only unethical,
it's illegal.
(04:17):
Third,
what kind of light does this shine on
the people involved?
You see,
since I don't do a ton of training for
workers or sit through much safety
training myself in my current roles,
I get to see these images and videos
posted on social media platforms like
LinkedIn.
You know,
the acclaimed social media network for
supposed professionals?
This is a great opportunity for a study
(04:39):
on the current state of the safety
profession.
Just jump to the comments section.
I guarantee you'll find a bunch of
so-called safety professionals talking
about how stupid the person was who got
hurt or killed in whatever image or
video got posted.
You'll have entire comments sections
trying to do a full on investigation or
root cause analysis from 30 seconds of
(04:59):
video.
Like seriously?
This is why I hold so much cynicism for
our current state of affairs.
Is that how they do their jobs?
No wonder we can't move the needle on
workplace safety.
On top of that,
these things get posted as quote,
safety fails,
creating a mockery of people involved
in incidents or unsafe situations.
They are designed to blame the people
(05:21):
shown for the unsafe situation we see.
But the real buffoonery is in the
self-proclaimed safety professionals who
post and comment on this stuff thinking
they're doing anything to help the
situation.
So that's the ethical discussion.
And there's a lot more to be said about
it.
But one of my favorite things to do is
bring quantitative data into that part
of the discussion.
(05:42):
I don't do it right away.
I like to let the students stew and
theorize over the do's and don'ts
before I bring the hammer down.
You see,
there's a reason we started using the
macabre in our safety training.
It's to change behavior.
Now,
when I teach safety to safety professionals,
(06:02):
I spend little time on behavior-based
safety.
I spend more time on system safety
discussing the organizational and
physical mechanisms we can put into
place to control hazards, reduce risk,
and avoid human error entirely.
Humans are humans and do human things,
so depending entirely on behavior
is just not the most reliable way to
(06:23):
ensure safety.
When, and I mean when, humans do fail,
we want them to be able to fail safely.
However,
I do have a strong and lengthy background
in behavioral science.
Yes, strong and lengthy.
I've performed a ton of behavioral
research,
and I consider behavioral science to be
(06:43):
one of my core skill sets.
So naturally,
I have data to present when we talk
about things that affect behaviors.
So here's how the conversation went
most recently.
I said,
so how long do you think scare tactics
actually affect behavior after they are
used?
The students give their best guesses
and go back and forth for a minute or
(07:04):
so.
Finally, I dig into my databank,
meaning just what's sitting in my
memory from the research I've done in
the past.
I puff out my chest, hold my head high,
and I say, it's 30 to 60 days.
Now, for a good half a minute,
there are eyes that open wide,
murmurings and such,
and I finish my thought by saying yes,
(07:25):
compared to other factors that affect
behavior,
such as positive reinforcement,
scare tactics have a much shorter
lifespan in their ability to affect
behavioral change.
I'm always super proud of my ability to
cut through the theorizing and
contemplating at this point because I
just presented facts,
and now there's no more room left to
ponder.
This time, though, a student said, Dr.
(07:48):
Law,
would you be able to provide that research
for us?
I hesitated.
They got me.
I mean,
I distinctly remember that fact from my
research,
but I haven't looked at it in years at this
point.
Could I find it again?
Does it still hold up?
Is it relevant?
Has it been disproven?
Yeah,
students know how to make me sweat.
(08:10):
So finally I sigh and say, yes,
probably.
Let me find it and circle back to you.
That's what we say, right?
Circle back, circle back.
We're always circling back just like
we're running laps in life.
Circle back.
Then I thought, you know what,
this would make a good podcast.
I'm going to Study Finds myself.
(08:32):
Did I give the right information or am
I full of shit?
All right,
so here's how this is going to work.
Of course, I put my search terms in,
and I came up with a ton of articles.
I was not surprised to find a
collection of stuff that supports my
claims,
but then also a number of studies that
conclude that scare tactics work to
(08:52):
some degree.
I've got two main parts to how I'm
going to structure this.
First,
I'm going to make this a two-part
podcast because I care about your time
and frankly, I care about my time.
In part one,
I will present the studies that seem to
support the claims I made about scare
tactics being less effective than other
training methods.
In part two,
I will present the studies that create
(09:14):
the opposing viewpoint.
You know,
because I'm fair and balanced.
But I mean fair and balanced in a way
that actually gives a fair shot to both
sides rather than heavily leaning to
the side that is seemingly opposite of
other mainstream sources of
information.
You get the picture, right?
The other part is that I need to make
this a rapid-fire evaluation of
(09:35):
multiple studies.
Rather than spending a whole episode
talking about one study,
I need to give you several in one
episode.
Think of it like a quasi-meta-analysis,
except this is in no way as thorough as
a real meta-analysis.
I'm going to challenge myself to spend
no more than 10 minutes on each study
and I will give you the high points,
(09:56):
the intent, the design, the findings,
the limitations and my best opinion on
whether it's a good study or not.
Cool, are you ready?
I hope so.
We got a lot to get through today,
so let's get started.
Oh, one quick note.
In behavioral science research,
scare tactics are also referred to as
fear appeals or threat appeals.
(10:17):
So you'll hear me go back and forth on
terms as we go through these studies.
Today,
I have four studies and I'm going to go
in chronological order by publishing
date.
This first one is by Carey and
colleagues and was published in PLOS
ONE in 2013.
The title is,
The Impact of Threat Appeals on Fear
Arousal and Driver Behavior:
(10:40):
A Meta-Analysis of Experimental
Research 1990 to 2011.
This is actually a great place to start
because a meta-analysis brings in a
whole bunch of research together to try
to paint a bigger picture rather than
being super hyper-focused on one
specific application.
I may even cheat and spend a little bit
more time here since this will probably give
(11:00):
us a bunch of information.
Real quick, just so we cover this base,
this study was funded by a bursary from
the Road Safety Authority of Ireland.
The funders had no role in study
design, data collection and analysis,
decision to publish,
or preparation of the manuscript,
and the authors have declared that no
competing interests exist.
On to the study.
(11:20):
The authors state, quote,
threat appeals have been widely
utilized in road safety advertising
campaigns in an attempt to discourage
risky driving and typically present
graphic representations of the death
and injury that may occur as a result
of a road traffic collision.
Okay,
so this is exactly what we're looking
for.
Solid.
(11:40):
Their goal was to clarify the
usefulness of threat appeals by looking
at experimental research that has
already been conducted,
and they state that up to this point,
no other meta-analysis has been done to
specifically look at the relationship
between threat appeals and driving
behavior.
Now,
the authors spend a good page or so
talking about previous studies that
(12:01):
look at the relationship between threat
appeals and behavior and introduce
theoretical models.
That's great, actually.
Good analysis should start with a
model.
Models help us visualize the
relationships between variables,
know which variables to measure,
and understand which relationships to
measure as we gather and analyze data.
An important thing to note about
(12:22):
models, though,
models are not telling you for certain
that relationships exist.
They might be created based on
preliminary research that established
those relationships,
but they are not intended to tell you
that it will always work that way.
They are basically a map that guides
you towards the things you want to
measure as you do the research.
(12:43):
So don't ever let somebody hand you a
theoretical or conceptual model and
tell you that this is the way things
are.
That's not how models work.
One important model that the authors
discussed is the extended parallel
process model.
I found this model and I'll include a
couple of links in the episode notes so
that you can see what I'm talking
about.
Let me walk you through it because the
way this is presented might actually
(13:04):
inform how we look at the other studies
as well.
Basically,
this model seeks to show relationships
between variables that help determine
if a fear appeal or a threat appeal
will result in a positive behavior
change, if it will be rejected,
or even if it may exacerbate the
problem.
If the person receiving the message
thinks the threat is not severe enough,
(13:25):
or they think they are not susceptible
enough to the threat,
the message is rejected.
If the message has a high enough threat
appraisal in terms of severity and
susceptibility,
we move on to the efficacy appraisal.
Efficacy is also divided into two
parts.
Response efficacy,
which assesses whether the person
receiving the message believes the
recommended behavior will prevent or
(13:46):
reduce the threat, and self-efficacy,
which assesses whether the person
receiving the message believes they are
capable of doing the recommended
behavior.
Again,
if the efficacy appraisal
is low, the message is rejected.
But not only that,
the message has already induced fear.
So not only have you not changed the
behavior,
(14:06):
you have also scared them shitless,
and that's where the exacerbation can
occur.
We're only making the problem worse.
Finally,
if both the threat appraisal and the
efficacy appraisal are high,
then we get the behavior change we
intended.
One of the links I'm including will
show you the model,
another will show a matrix similar to a
(14:27):
risk matrix,
so pick your poison on how to best
digest this information.
Now,
the reason I feel it's so necessary to
dig into this model specifically is
because of that efficacy piece.
I think we are going to see that quite
a bit in these studies.
If we do want to scare the shit out of
people with graphic imagery and videos,
they also need to feel like they have
(14:47):
the ability to do what we want them to
do.
What affects efficacy?
Lots of things.
The environment around a person,
what others are doing,
what they think others will think of
them, the tools available,
historical knowledge,
all sorts of things.
Are you starting to see why
behavior-based safety can get a little
challenging to try to accomplish?
(15:08):
The authors say this about what they
found in the literature that helped
them build the background to this
study.
Quote,
key points to emerge from these models
and from existent empirical research in
general is that the underlying
psychological mechanisms at play in the
fear-to-behavior relationship are
likely to be complex and do not always
(15:29):
involve direct causation.
Rather,
there are likely to be moderators and
mediators of this relationship and a
key task for researchers and
advertisers is to identify and better
understand these factors.
Dr.
Law, what are moderators and mediators?
I'm glad you asked.
Moderators, or moderating factors,
affect the relationship between
(15:50):
variables.
For example,
if I want to figure out how different
dish soaps affect grease,
the way those dish soaps affect the
grease might be impacted by the type of
material of which the dish is made.
The dish material is the moderator to
that relationship.
A mediator, or a mediating factor,
(16:11):
lies on the causal pathway of that
relationship and it's also directly
affected by the independent variable.
I'll have to use a different example
for this.
The amount of sleep I get might affect
how much work I get done the next day.
In fact,
the amount of sleep I get affects my
alertness,
which in turn affects how much work I
get done.
Alertness here is the mediator.
(16:33):
In the model we just talked about,
you could say that efficacy is a
mediator between a threat appeal and
the desired behavior.
In fact,
I believe we might see some studies
that explore that exact relationship.
Okay, I see my clock is ticking,
so let's dig into the rest of this
study.
In a meta-analysis,
you treat articles as your participants
when it comes to creating a valid
(16:54):
sample.
With that, you have exclusion criteria.
So the authors started with 54
articles.
They excluded 16 of the initial lists
because they didn't have a control
group or didn't control confounding
factors.
They truly wanted experimental studies.
They excluded another 15 because those
studies did not provide statistics to
compute effect sizes.
(17:15):
Then they excluded another 10 to end up
with only 13 that looked at the
difference in behavior between
participants exposed to threat appeals
and those in a control group.
The total sample size of participants
across these 13 studies was 3044.
So what did they find?
Four of these studies measured fear as
an outcome.
(17:36):
Total sample size of 619.
They found that threat appeals had a
statistically significant effect on
fear arousal compared to control
groups.
In other words,
scare tactics statistically
significantly scare the shit out of
people.
Next,
they looked at the overall effect of
threat appeals on driving behavior.
I'm going to read this verbatim, quote,
(17:58):
no significant effect of threat appeals
on the driving outcome variables
emerged.
When we examined each outcome variable
separately,
we found no significant difference
between threat appeal and control
groups on self-reported intention to
take driving risks or in driving
simulator speed or speed during a video
speed test.
They continued, quote,
(18:19):
studies using a video-based
manipulation produced particularly
strong effects on fear with no effect
on driving behavior or intentions.
Bottom line,
we scared the shit out of them,
but it didn't work.
So let me talk through limitations to
this study.
There are some that the authors talk
about,
but some that I observed as well.
First,
(18:40):
only four of the 13 studies measured
fear, not just the behavior outcomes.
If we go back to that extended parallel
process model,
both fear and efficacy are potential
mediating factors.
Without both of these things,
according to the model,
a behavior change might not happen.
With only fear,
we could make the problem worse.
(19:00):
The other part to this is that fear was
measured by self-reporting,
meaning that the participants rated on
a scale how scared they were after
being exposed to the scare tactic.
There are other ways to measure fear,
such as heart rate, skin conductance,
et cetera,
that were not used in any of these
studies.
Also, other emotions such as guilt,
(19:21):
shame, and anger were not measured.
Previous research shows that these
emotions interplay with fear and could
determine the effectiveness of the
threat appeal.
Also,
as much as efficacy was discussed,
it was also not measured.
Lastly, for me,
we did not observe actual driving
behaviors.
We measured self-reports of behaviors
and we observed simulated driving.
(19:43):
Folks,
I have to say that my behaviors in the
popular video game Grand Theft Auto do
not reflect my behaviors in real life.
Even so,
we have this study that concludes that
while previous research suggests
threat-based
messaging can be effective under the
right conditions,
there is little evidence to suggest
that it consistently works.
(20:03):
Everybody still with me?
Okay, next study.
I have Brooks and Harvey,
who published a study in Social
Semiotics in 2015 titled,
Peddling a Semiotics of Fear,
a Critical Examination of Scare Tactics
and Commercial Strategies in Public
Health Promotion.
Just as a side note,
I had to Google the word semiotics.
(20:23):
From Wikipedia,
which I am personally deeming perfectly
acceptable in the cases where my
vocabulary is challenged,
semiotics is the systematic study of
sign processes and the communication of
meaning.
In semiotics,
a sign is defined as anything that
communicates intentional and
unintentional meaning or feelings to
the sign's interpreter.
(20:45):
Okie doke,
so we're looking at how messages are
interpreted by the receiver.
I'm with you.
I just needed a minute to get my
vocabulary straight.
Looks like both of these authors worked
at the University of Nottingham in the
UK,
the first as a doctoral researcher in
health communication,
and the second as a lecturer in
sociolinguistics.
These folks basically looked at scare
(21:06):
tactics used in a nationwide health
promotion campaign in the UK aimed at
raising the public's awareness of type
2 diabetes.
Now personally,
I love diabetes research.
Mostly I love it because there is a ton
of it,
and any kind of specific function or
intervention that exists for general
public health initiatives has almost
certainly been both applied and
(21:27):
researched for type 2 diabetes.
We still haven't fixed it,
but we've researched the hell out of it
and I'm here for the data.
It looks like the intent of this
campaign was to advocate personal
responsibility for assessing both
individual and others' risk of type 2
diabetes.
The authors separated out three
different messaging techniques used in
the campaign.
(21:47):
First,
the depiction of grief and amplification
of diabetes-related danger.
Second,
the promotion of diabetes risk and
localization of individuals'
responsibility for their health.
And third,
the commercial branding and framing of
this Diabetes UK and Tesco partnership,
the folks putting on this campaign.
Diabetes UK is a diabetes charity and
(22:07):
Tesco is a British supermarket chain.
The branding and framing of the
partnership included promotion of goods
and services as a means of diabetes
prevention and management.
So this study is a critical multimodal
discourse analysis.
They analyzed the communication in this
campaign examining how the different
pieces convey meaning and ideology.
This is not my forte.
(22:29):
I'm going to make up some time here
because I can't critically evaluate
this with my own knowledge and skill
set.
I'd love to bring a guest on eventually
to discuss this with me, but frankly,
I don't have a whole lot to offer you
on this one.
In fact,
the nature of this study is not
particularly useful to me because there
are no numbers.
There's no data.
Critically analyzing meaning?
(22:50):
This isn't experimental at all.
We're just creating an argument.
So what is the argument?
I'm jumping down to the discussion
because the analysis itself is meaty.
You're welcome to follow the resource
in my episode notes if you have access
and you like this type of study.
Personally, it's not for me,
but I'm not gonna knock it.
After all,
Wikipedia says the semiotics thing is
(23:12):
in fact a science, right?
I'm going to pull this part here
because I think we can still spark some
thought.
Quote,
this prominent campaign emphasizes the
dangerous and potentially fatal
consequences of individuals not taking
responsibility for personally assessing
and responding to their and their
families' risk of diabetes,
whether they are actually at risk of
(23:33):
contracting the disease or not.
Consequently, the campaign presents,
we argue,
only a partial picture of both diabetes
and the actual lived experiences of
people who have the condition.
Hmm.
This portrayal obscures the reality of
diabetes as a disease,
which, although it can indeed have various
(23:54):
serious,
long-term biological consequences,
people are able to manage successfully,
living full and long lives.
Thus,
the campaign mirrors the somewhat sensationalist
way in which some quarters of the media
report illnesses,
with a strong focus on the debilitating
and fatal aspects of the disease,
(24:14):
at the expense of reporting how people
actually live with and overcome the
effects of illnesses.
Ah, I see.
This actually takes us back into the
ethical discussion,
but from a slightly different
perspective.
The authors are taking issue with the
fact that the campaign images did not
reflect reality.
The question they are asking is,
(24:35):
what is the point of scaring people if
the outcome portrayed does not
accurately reflect the likely
experienced outcome and if the
interpretation of meaning is in
question?
Let me put it another way.
Does the macabre imagery we use in
safety training accurately depict the
outcome we can expect?
What is the likelihood that that exact
(24:56):
scenario would happen again?
If workers are able to work around the
hazards depicted and never experience
the injuries depicted in the graphic
imagery,
does it convolute how meaning is
derived from the imagery?
I think these are interesting
questions,
but not necessarily getting me the data
I was looking for.
Let's get back into some real data,
(25:16):
shall we?
That's what we're all here for, anyway.
This next study by Turkle and
colleagues was published in 2020 in a
journal that I cannot pronounce,
but thanks to Google Translate,
I can tell you it is the Turkish
Journal of Communication Research.
The title of the study is Use of Fear
Appeal in Work Safety Messages,
an Experimental Study.
(25:37):
Finally,
we've got a study that looks at this
stuff in workplace safety.
Perfect.
I'm glad everyone has stuck with me so
far, so let's dig in.
This study comes from the Izmir
University of Economics.
It looks like this article was derived
from the master's thesis of the third
author, and the two lead authors are the
student's professors.
(25:57):
Oof.
I guess that's how they do it in
Turkey.
Still, at least they've got this.
I still haven't turned my doctoral
study into a published article.
One day, I guess.
This study sought to compare attitudes,
perceived ethicality, fear emotions,
and behavioral intention of workers
when exposed to a stimulus containing
a threat of physical injury message
(26:18):
compared to a stimulus without such a
message.
Now,
a lot of the background here about fear
appeals is very similar to what we
talked about in the first article.
To no surprise,
the extended parallel process model is
discussed along with some other
theoretical approaches including drive
theories, the parallel response model,
and protection motivation theory.
(26:39):
I'm getting the sense here that the
extended parallel process model is the
latest and greatest and seems to be
highly regarded.
I'm going to jump down to hypotheses.
I'm excited because so far on this
podcast.
I don't know if I have yet shared a
study with real,
well-written hypotheses.
You ready?
Here we go.
H1, that means first hypothesis.
(27:00):
Workers' responses will differ on
attitude towards the message when
exposed to a stimulus containing a
threat of injury message compared to
one without any threat of an injury
message.
H2,
workers' responses will differ on perceived
ethicality when exposed to a stimulus
containing a threat of an injury
message compared to one without any
(27:20):
threat of an injury message.
See the pattern?
Workers' responses will differ on x
when exposed to a stimulus.
It's repetitive, sure,
but it makes this contained,
consistent, and measurable.
We are only changing the dependent
variable in the hypothesis.
So, H1 is attitude towards the message,
(27:42):
H2 is perceived ethicality,
H3 is fear emotions,
and H4 is behavioral intention in work
safety.
Folks, you don't even know,
I am in my happy space.
This is what I expect from good
research.
Give me more, come on.
So let's look at methodology.
The authors used a print advertisement
(28:03):
from WorkSafe Victoria and modified it
into two different versions for the
study,
one that has graphic imagery and one
that doesn't.
The message in Turkish says, quote,
before it's too late, in Turkey,
661 work accidents occur every day.
Don't let one of them be you.
Take precautions, end quote.
If you open up this article,
(28:24):
the images are in the appendix and I
had a little laugh out loud moment.
It's the same image of this guy sitting
at a table looking super serious.
In one image, he has no hand.
Instead,
he has a stub of a forearm and obviously
lost his hand in some kind of gnarly
accident.
In the second image,
his hand has been Photoshopped back on,
(28:45):
I'm guessing.
At first glance,
you might not realize the hand wasn't
originally in this photo,
so I guess it works.
It just made me laugh,
but I have a weird sense of humor.
They actually did some good work to get
rid of confounding factors.
For example,
they ran a pilot to determine what type
of logo to use,
making sure the logo itself did not
(29:05):
elicit any positive or negative
feelings.
They also ran a chi-square test of
independence to ensure fatalism
tendency
and self-efficacy were matched between
the two groups,
noting that both of these affect
behavioral intention and that work
accidents happen more frequently in
firms where workers have higher levels
of fatalistic beliefs about work
(29:25):
accidents.
That's probably a study for another
time.
They also ran a pilot to analyze items,
assess internal consistency,
and stability of measures.
Heyo!
Statistical validation of the
instrument.
Love it.
So they had a total of 300
participants, which is pretty solid,
especially for a master's thesis.
(29:49):
Frankly,
it's impressive that this type of
research was even done for a graduate
-level degree.
Anyway,
they split the sample into two groups.
One gets the advertisement featuring
the guy with the stub,
and the other gets the guy with a hand.
Remember, everything else is the same,
including the message.
What did they find?
Well,
the group that got the guy with the
(30:09):
hand had statistically significantly
more positive feelings toward the
advertisement,
and they also felt it was more ethical.
Here's the kicker.
No statistically significant
differences in fear emotion or
behavioral intentions between the two
groups.
Now, to be fair,
the reason it's good to dig into this
one is because there are some
limitations and maybe some additional
(30:30):
things we can infer knowing what we
know from other things we've looked at.
First, obviously,
what we found here is that showing the
guy with the stub was unnecessary for
the message.
We get basically the same behavioral
intention if we just show the guy with
his whole hand.
Second,
there was no difference in fear emotion
between the groups.
(30:50):
It's not really clear here,
but it's possible neither of the groups
were scared enough at all.
Remember,
if we go back to our favorite extended
parallel process model,
we need both fear and efficacy to
create behavior change.
Also,
since the level of behavioral intention
is also not clear,
we don't know for sure if this
(31:10):
advertisement made any difference at
all in behavior,
whether the guy had a hand in the photo
or not.
The authors fairly bring up the fact
that more research needs to be done on
different types of media,
such as video.
Then you've got your normal limitations
like the fact that this was a
convenience sample and all of the
results were self-reported by the
participants rather than being observed
(31:31):
objectively.
Overall,
what I'm taking from this study is that
it was performed at the level of
thoroughness and rigor that I expect
from good research.
Graphic images can affect positive and
negative feelings and continue to bring
up questions of ethicality,
but overall were unnecessary
to create behavior change in this
instance.
(31:52):
Okay, are you ready for the next study?
Do you need a break?
Just push the damn pause button on the
podcast.
It's the one with the two vertical
lines.
I'm not responsible for your breaks.
Sheesh.
When you come back,
push the button with the sideways
triangle.
Here we go.
Last study I have for today.
This study by Kohler and colleagues was
published in BMC Public Health in 2022
(32:14):
and it's titled Change of Risk Behavior
in Young People:
The effectiveness of the trauma
prevention program P-A-R-T-Y or PARTY,
considering the effect of fear appeals
and cognitive processes.
That's a long ass title.
This one is Open Access,
so you can read this whole thing using
the link in the episode notes.
This study was conducted in Germany.
(32:36):
Open access publishing was funded by
Projekt DEAL.
One of the authors was involved in the
implementation of the PARTY program and
another is employed in the research
department of Projekt DEAL.
Aside from that,
no competing interests and the study
was reviewed for ethical approval by
Bielefeld University.
So,
the purpose of this study was to examine
the effectiveness of the Injury
Awareness and Prevention Program PARTY,
(32:58):
which stands for Prevent Alcohol and
Risk-related Trauma in Youth in
Germany.
What this program does,
it's a one-day injury awareness and
prevention program for youth.
A school class will spend a day in a
trauma hospital experiencing the
various wards through which a seriously
injured person goes.
Brutal.
Apparently,
(33:18):
they've been doing this in Canada for
more than 30 years.
It's also been used in Australia,
New Zealand, and the US.
So how does this work?
The students get two half-hour lectures
on trauma and prevention,
usually held by a trauma surgeon for
the trauma part and a police officer
for the prevention part.
Then they take the students through the
(33:39):
individual wards, the ambulance,
the trauma room, the intensive ward.
They are guided by a medical or nursing
hospital staff member.
Sometimes,
the students can talk to the patients
there if it's possible.
They also visit physiotherapy to get an
idea of how tedious and difficult
rehabilitation can be after a serious
(34:00):
injury.
Finally,
there is a 20-minute talk with a former
seriously injured patient,
and they end with a joint reflection on
the day.
Gonna be honest,
this sounds a lot like some workplace
safety trainings I've seen,
especially the part where we hear a
story from somebody who got injured on
the job.
Now,
since this type of program has existed
(34:21):
for quite a while,
it has been studied before.
Previous research found short-term
effects, 1-2 weeks,
on knowledge and attitudes.
That's not very long.
They've also found medium-term effects,
6-12 months,
on self-reported risk behavior.
There's that self-reported thing again.
Of course, I'm being safe, Dad!
(34:42):
And they've apparently found long-term
effects,
up to 44 months, on traffic offenses,
injuries, and deaths.
So this study specifically wanted to
look at the fear appeals of the program
and measure its effectiveness on risk
behavior in traffic and to look at how
the fear appeals affect beliefs.
They noted that there are a lot of
complications here,
(35:03):
bringing up again the extended parallel
process model,
efficacy and behavioral beliefs,
normative beliefs,
and control beliefs associated with the
theory of planned behavior.
So what are all of those?
Behavioral beliefs are attitudes that
result from previous experience with a
behavior,
whether they be positive or negative.
Normative beliefs are the perceived
(35:25):
pressures to comply with the
expectations from other people.
See?
I told you that would come up.
Control beliefs are how a person
perceives they can actually execute a
behavior, again,
getting into that efficacy piece.
Okay, methods.
How did they do this study?
They looked at 19 PARTY day
intervention classes among 12 schools
(35:46):
and seven different trauma centers.
They used 11 control classes,
classes that didn't do a PARTY day.
Apparently some of the schools weren't
able to provide a control class,
so that's why they have fewer in that
group.
And they did a pre-post-test design.
That means they tested each class
before the intervention and after the
(36:07):
intervention.
They tested a total of 574 students,
including 300 that did the intervention
and 244 who did not.
Now,
as far as the measurements and statistical
tests,
this study is actually very in-depth.
They measured a bunch of different
relationships between a bunch of
(36:28):
different variables,
including intention, threat,
susceptibility, severity,
self-efficacy, and behaviors.
They covered all the bases here,
and I don't have time to go through
everything,
and you probably don't want me to.
Like I said, this one is open access,
so you are free to take a look and dig
deeper if you want.
I will say the pre-test shows that all
(36:49):
participants were on the same level
demographically and on their
willingness to take risks,
so we've got a good baseline.
Let me just read a few excerpts from
the results summary so we can get what
we're trying to find.
The authors say, quote,
the results show that the intervention
does not directly increase feelings of
threat.
However,
the program obviously affected a
(37:11):
theoretical determinant of the threat,
namely the perceived severity whose
change is directly related to a change
in approved behavior.
Nevertheless, in turn,
it does not increase a person's
confidence in his or her ability to
counter this threat actively,
end quote.
Basically,
we got across how severe an injury
(37:31):
could be,
but we didn't change their self
-efficacy,
and the authors say this provides one
possible explanation for why this
program does
not have the intended effects on
behavior.
Okay, a few more bullet points.
Quote,
immediately after the intervention,
small positive effects could be shown
for most of the parameters.
(37:52):
These effects could not be observed in
the medium term,
and especially not for self-reported
behavior.
In other words,
we saw a little bit of lift in the
intended direction,
but we did not measure behavior for a
long time after the intervention,
so that's why we can't tell you from
this study what's happening here.
Next, quote,
both in the short and medium term,
the results showed a significant effect
(38:13):
only on the threat-related
characteristic perceived severity of
accidental injuries.
Again,
we succeeded in showing how bad an
injury could be.
Finally, quote,
the predictive influence of cognitive
beliefs in particular on a change in
behavioral intention could be
confirmed,
such as self-efficacy or social norms.
(38:33):
However,
the program had no or only short-term
effects on these factors.
So,
this type of intervention didn't really
affect how they felt about the
behavior,
how they perceived the expectations of
others to execute the behavior,
or whether they had control or self
-efficacy over the behavior.
The conclusion here is that fear
(38:53):
appeals,
the primary function of this program,
are unlikely to prevent or reduce risky
behaviors, because they're not affecting
all of these other belief factors.
We're not affecting self-efficacy,
so we are not giving these students the
tools they need to actually do what we
need them to do.
Quote,
fear appeals may arouse the interest
(39:14):
and attention of young people as a kind
of door opener,
but without strengthening psychosocial
beliefs, namely self-efficacy,
they seem to be more a flash in the
pan.
I love it when researchers get
idiomatic.
Continued, thus,
the PARTY program seems to be
classified in a series of educational
measures to change behavior,
(39:36):
which produce short-term effects,
but for which no behavior-related
effects can be proven in the long term.
And that, folks,
seems like a great place to end this
episode.
What did we learn?
Four different studies I presented here
question the effectiveness of scare
tactics as a method of getting people
to do what we want them to do,
and three of them, anyway,
(39:57):
have quantitative analysis with sound
data to say it's not working the way we
intended it to work.
So, this has been part one.
In part two,
I'll present the other side,
and I'll critically evaluate those
studies as well.
I hope this has been as thought
-provoking as the topic always is for
my ethics course students.
(40:17):
Now, the question is,
where the hell did that study go that
gave me that 30 to 60 days number?
Until next time, I'm Dr.
Matt Law.
This has been another episode of Study
Finds on the Prove It To Me podcast.
Take care and stay safe, everyone.
(40:49):
Prove It To Me is produced by me,
Matt Law,
original music by West London.
You can find this podcast on Podbean,
Apple Podcasts, Spotify, YouTube,
Amazon Music, and iHeartRadio.
Like what you've heard so far?
Please like, subscribe,
and follow wherever you get your
podcasts,
and leave a 5-star review on Apple
Podcasts.
Got questions about what we talked
(41:10):
about or research that you want to
share?
Send an email to contact@proveitpod.com.
The views and opinions expressed in
this podcast are those of the host and
its guests and do not necessarily
represent the official position,
opinion,
or strategies of their employers or
companies.
Examples of research and data analysis
discussed within this podcast are only
(41:30):
examples.
They should not be utilized in the real
world as the only solution available as
they are based on very limited,
often single use case,
and sometimes dated information.
Assumptions made within this discussion
about research and data analyses are
not necessarily representative of the
position of the host, the guests,
or their employers or companies.
No part of this podcast may be
reproduced,
(41:51):
stored in a retrieval system,
or transmitted in any form or by any
means, mechanical, electronic,
recording,
or otherwise without prior written
permission of the creator of the
podcast.
The presentation of the content by the
guests does not necessarily constitute
an active endorsement of the content by
the host.