Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:07):
The unsung heroes of safety aren't always in a rule book.
Often it's the continuous performance of people plugging
the gaps when our systems fall short.
What are some better ways to consider the roles of people
within various systems? Grab a coffee, this is a longer
pod. Good day everyone.
I'm Ben Hutchinson, and this is Safe As, a podcast dedicated to
(00:31):
the thrifty analysis of safety, risk, and performance research.
Visit safetyinsights.org for more research.
Today's paper is from McLeod 2017, titled Human Factors in
Barrier Management: Hard Truths and Challenges, published in
Process Safety and Environmental Protection. This paper discusses some hard
(00:53):
truths in the assurance of human performance.
It said human performance continues to be relied upon as a
control, yet organisations may have miscalibrated ideas about
how, and when, human performance can actually be relied upon.
And normally, human performance
is the overwhelming reason for successful outcomes even in the
(01:14):
face of poor systems and resources.
Therefore, organisations may struggle to ensure that the
performance of the people they rely on can reasonably be
expected to happen when and where it's needed, and that their
controls are as robust as expected in the face of human
performance variability. First, a common claim following
investigations is that if only people had followed the rules,
(01:37):
then the incident wouldn't have happened.
But this assertion relies on some implicit assumptions: that
the organisation actually has all of the procedures it needs;
that the procedures are specific,
accurate, clear, up to date, and even valued by people.
The people have the knowledge, skills and training to know what
(01:59):
procedures to leverage and when, that they'll accurately
recognise the situations that call for that procedure, and that
they can then carry them out under the conditions that exist at the
time. And where procedures are almost
always written for a perfect world, they often assume a
vacuum that doesn't exist in reality.
It's said that some hard truths about how people see and
(02:20):
interpret the world and respond are hard because they can be
difficult and inconvenient to design and manage for.
But being hard doesn't make them less valid or less important to
address. So some of the hard truths
discussed by the author: human
emotion, thought, performance, and attitudes are all highly
situated. They're influenced by the
(02:42):
situational context at the time. The design and layout of work
systems, the equipment, interfaces and the environment
all influence how people sense and respond in the world.
People optimise their performance, even if it may be
riskier in hindsight. And people are not necessarily
rational in a classical sense. For instance, type 1 and type 2
(03:02):
thinking and pattern matching and heuristics, or how sensitive
people are to loss aversion. This isn't a bug though, it's
just a feature of humans. So let's cover some of the
definitions that we're using in this paper.
In an earlier episode I said I'd cover more precise definitions
of controls, etcetera. Well, here we are.
So control means measures that are expected to be in place to
(03:24):
prevent incidents. Controls are comprised of
barriers and safeguards. Barriers are types of controls
that are assessed as being sufficiently robust and
reliable, and they can be relied on as a primary control measure
against incidents. They can be passive or active
and be a combination of human elements and technology and
(03:44):
other engineering elements. In contrast to a barrier,
safeguards are controls that support and underpin the
availability and performance of barriers, but cannot meet the
standards of robustness or reliability to be relied on as a full
barrier. So in other words, barriers are
at the highest tier, whereas safeguards are often things that
(04:07):
help support barriers or things that really don't meet the same
definition. They're not as reliable.
Another distinction can be made around human barrier elements or
organisational barriers. This is when the company
explicitly prescribes how decisions are to be taken and what
is to be done, by means of rules, instructions and procedures.
(04:28):
There's also operational barriers.
These are when there's no specifically prescribed manner
of deciding or acting where the individual is given quite a bit
of discretion to take appropriate action.
This relies more on operating skills and capabilities.
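As a rough illustration of this typology (a sketch of my own in Python, not something from McLeod's paper; all class names and example controls are hypothetical), the control, barrier and safeguard tiers, plus the organisational versus operational split, could be modelled like this:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class ControlTier(Enum):
    BARRIER = "barrier"      # robust and reliable enough to act as a primary control
    SAFEGUARD = "safeguard"  # supports barriers; not robust enough to stand alone

class BarrierMode(Enum):
    ORGANISATIONAL = "organisational"  # prescribed via rules, instructions, procedures
    OPERATIONAL = "operational"        # discretionary; relies on skill and judgement

@dataclass
class Control:
    name: str
    tier: ControlTier
    mode: Optional[BarrierMode] = None  # only meaningful for barriers

controls = [
    Control("Emergency shutdown valve", ControlTier.BARRIER, BarrierMode.ORGANISATIONAL),
    Control("Operator response to abnormal reading", ControlTier.BARRIER, BarrierMode.OPERATIONAL),
    Control("Local warning signs", ControlTier.SAFEGUARD),
]

# Barriers sit at the highest tier; safeguards support them.
barriers = [c for c in controls if c.tier is ControlTier.BARRIER]
```

The point of the sketch is just that tier and mode are separate questions: a control is first classed as barrier or safeguard, and only barriers then get the organisational-versus-operational distinction.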
Right this second there's some people furiously typing an angry
letter to me about the ICMM's definition of critical controls.
(04:50):
I like that approach too. But remember, the town is big enough
for more than one typology, and the approach presented today has
decades of support from offshore oil and gas.
But whatever floats your boat. Next, the author covers some of
the criteria for robust controls, particularly barriers.
Remember, several criteria must be met for controls to be
(05:13):
classed as full barriers. These are: the control must be specific to a
single potentially hazardous event, so specificity; it must be
independent of other protection layers, independence; it can be
counted on to do what it was designed to do, dependability; and
it's capable of being audited or observed, auditability.
(05:35):
Now, there's a proviso here. These make a lot of sense for
engineering systems, but are a little bit more challenging when
we're coming to human performance.
For instance, assuring true independence with human
performance is a major challenge.
Factors like workload and fatigue and distraction can
defeat multiple controls. Organisational factors can
(05:56):
influence control performance, like incentivizing certain
outcomes or contractual arrangements.
Independence achieved by having double checks by another person
may also not be truly independent since it's also
affected by a range of factors. This has also been called the
fallacy of social redundancy. So there's a range of these
factors, and they're said to often get overlooked when deciding how
(06:18):
to assure human performance. According to the author,
judgments about the likely effectiveness of controls that
rely on human performance often aren't clear about what is
actually expected or intended from people.
So we often don't really clarify what we expect of human
performance, if it's acting as some sort of control, and how we
(06:39):
even would assess the effectiveness.
Things like the design of the work environment or equipment
interfaces. For instance, if the control relies on someone
opening or closing a valve, then a clear intention is necessary
that people will know which valve to operate, how, when and
why to operate it, and the valve needs to be designed and
labelled in a way that minimises the chance of people
(07:00):
operating it out of sequence. The author draws on guidance
from other human factors organisations and then it's
argued that most organisational measures should be treated as
safeguards rather than as barriers, because they just don't
meet that higher bar of reliability and effectiveness.
So safeguards would include local warnings and signs, or
(07:22):
design and implementation of alarms, human machine
interfaces, and job design. Organisational safeguards
are more about ensuring that the barriers that are expected to
function are not degraded or defeated by other factors.
If you're familiar with bow ties, then things like
escalation or degradation factors would be more akin to
(07:43):
safeguards. So, quoting McLeod, safeguards
cannot and do not need to provide the same level of risk
reduction as barriers. Nevertheless, safeguards should
still have clear ownership, be capable of being audited and be
traceable to some element of the organisation's management
system. Importantly, a control may be
(08:04):
a barrier in one situation but may be treated as a safeguard
elsewhere in the organisation if the company is unable or
unwilling to invest the resources to ensure that that
barrier functions to the necessary specifications.
Again, the author draws on some guidelines from a human factors
organisation and draws out eight concerns with human and
(08:24):
organisational factors. So, top events in a bow tie: they're
often situated too far to the right, where the
events that are sought to be avoided are too close to the
consequences. Too many barriers are
identified, many of which don't meet the accepted criteria for
being a barrier. In fact, most of the controls
that rely on human performance meet the definition of
(08:46):
safeguards rather than full barriers.
Human and organisational factors are rarely incorporated into
barrier models, and ideas of cognition and complexity are
also rarely incorporated into the performance of barriers.
Work-as-imagined versus work-as-done rarely weighs into
considerations. Again, we often have a perfect
world view around procedures and controls.
(09:07):
They'll work first time every time.
Human error is frequently identified as a threat in the
bow tie, and then barriers are identified to block the error
from leading to the top event. Remember, this is a myth that we
should be trying to avoid, not an endorsement that we should be
seeing human error in that way.
The implicit expectations of human performance are rarely made
explicit, and barrier models are often designed and implemented for
(09:33):
the workforce in a manner that doesn't really properly support
their operational use. We do safety to them rather than
with them. One interesting point from the
paper is how human error shouldn't be identified as a
threat in the bow tie analysis, since it creates a misleading
impression that the risk of human error is being adequately
managed by barriers. Further, according to the paper,
(09:56):
it can promote a focus on minimising human performance
variability over recognising the real barriers and ensuring that
they are as robust as can be. Focusing on human error in a
threat line also removes human performance factors from their
context. Critically, treating people as a
threat in the bow tie also misses the opportunity to
develop a deeper understanding of the ways people provide
(10:19):
flexibility and adaptability, and therefore contribute to the
system resilience. Therefore, seeing people and their
performance variability as threats reinforces a negative
view of people as unreliable factors to be managed.
Instead, more focus should be directed towards understanding
the performance requirements of the interactions of people and
(10:39):
technology and what's needed to ensure the robustness of
barriers and their escalation factors, which I would argue
heavily involves learning from normal work and work analysis
methods, etcetera. Next, it's argued that barriers
should consider the following factors.
We should consider the performance of the barrier and
what it's expected to deliver, and that should be specific to
(11:00):
the threat and the situation. Who's involved in delivering the
performance? Who detects the demand on the barrier?
Who decides what needs to be done?
Who takes action? What info is needed for
successful performance in the situation?
What decisions or judgments are likely to be involved?
What actions need to be taken and how will the operators know
whether the actions have been successfully completed?
(11:22):
Do they ever receive feedback during the task?
Is there any other technical or non-technical guidance to be
followed? Next, the paper also looks at
the standards for successful performance of the barrier, arguing
that this should include the maximum allowable time to detect
an event to trigger the function, the accuracy of
interpreting the events, the maximum allowable time to
(11:44):
initiate a response, the acceptable reliability,
tolerance limits for acceptable performance, and some other
factors. So we're right at the end.
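Pulling together the who-detects, who-decides, who-acts questions and the timing and reliability standards just covered, here's one hypothetical way such expectations could be made explicit (my own sketch; the field names and numbers are invented for illustration, not from the paper):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BarrierPerformanceSpec:
    """An explicit statement of what a human-dependent barrier
    must deliver. Field names are illustrative only."""
    threat: str
    detector: str                # who detects the demand on the barrier
    decider: str                 # who decides what needs to be done
    actor: str                   # who takes the action
    info_needed: List[str]       # information required for success
    max_detect_seconds: float    # max allowable time to detect and trigger
    max_response_seconds: float  # max allowable time to initiate a response
    required_reliability: float  # acceptable probability of success on demand

    def within_limits(self, detect_s: float, response_s: float) -> bool:
        """Check an observed demand against the allowable times."""
        return (detect_s <= self.max_detect_seconds
                and response_s <= self.max_response_seconds)

spec = BarrierPerformanceSpec(
    threat="High level in storage tank",
    detector="Control room operator",
    decider="Control room operator",
    actor="Field operator",
    info_needed=["level alarm", "valve line-up"],
    max_detect_seconds=60,
    max_response_seconds=300,
    required_reliability=0.9,
)
print(spec.within_limits(detect_s=45, response_s=200))  # True
```

The value of writing it down like this is simply that every expectation of human performance becomes explicit and auditable, which is exactly what the paper says is usually missing.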
In all, the paper argues that people are nearly always a
positive element in complex sociotechnical systems.
The objective should therefore be to strive to make people as
reliable as possible by setting them up to succeed.
(12:08):
Make it as easy as possible to do the right thing and as hard
as possible to do the unsafe, hazardous thing.
Organisations operating as complex sociotechnical systems
should seek to ensure they have in place the necessary systems
and support structures and should design and operate their
activities in ways that allow people to be as productive and
adaptable as they can be. That's it on Safe As.
(12:32):
I'm Ben Hutchinson. Please help share, rate and review, and
check out safetyinsights.org for more research.
Finally, feel free to support Safe As via the coffee link
in the show notes.