
LOPA = Layer of Protection Assumptions!

January 22, 2017

I've just watched neuroscientist Dr Daniel J. Levitin talk about information bias and warn us against being too skeptical or too gullible about the data ("news") we receive.

On the show he gave a simple example of the importance of the margin of error in statistics, proclaiming that he has blue hair with a margin of error of 100% and that any statistic without a declared margin of error is not to be trusted.

The essence of the discussion was fake news and the failure of pollsters to predict recent unexpected outcomes; however, his use of the word "gullible" really hit home with me: we (as an industry) are perhaps too gullible in believing the generic data we use when we don't have our own.

Nearly 2 years ago, I sketched out a post on (what might now be called) the margin of error in LOPA and the potential for inaccuracies to accumulate into an optimistic outcome, i.e. we think we are safer than we actually are.
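
To make that concrete, here's a minimal sketch (in Python, with purely illustrative numbers, not taken from any dataset) of how the multiplicative LOPA chain compounds individually "reasonable" optimistic assumptions:

    # Hypothetical LOPA scenario: the mitigated event frequency is the
    # product of an initiating event frequency (IEF) and the PFDs of the
    # independent protection layers (IPLs). All figures are illustrative.
    ief = 0.1            # initiating events per year ("once in 10 years")
    pfds = [0.1, 0.01]   # two IPLs, e.g. a BPCS loop and a SIL 2 function

    mitigated = ief
    for pfd in pfds:
        mitigated *= pfd
    print(f"Claimed mitigated frequency: {mitigated:.0e} /yr "
          f"(once in {1 / mitigated:,.0f} years)")

    # Now suppose each of the three factors is optimistic by only 3x,
    # well within the scatter of generic data. The error compounds:
    true_freq = mitigated * 3 ** (1 + len(pfds))
    print(f"Frequency if every factor is 3x optimistic: {true_freq:.1e} /yr "
          f"(once in {1 / true_freq:,.0f} years)")

Three modest assumptions, each out by a factor of 3, move the answer by a factor of 27: from "once in 10,000 years" to roughly "once in 370 years".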

Those of you who "get your hands dirty" with LOPA will recognize that statistically significant &
relevant data is as rare as unicorns tears or rocking horse shit so the "least worse" alternative is to use
generic data from OREDA, Exida, PERD, Faradip, TNO, CCPS LOPA Book or 61511.

Those of you who also check other people's LOPAs will recognize these numbers (apparently events only happen once every 10 years, and protection/people only fail 1 time in 10, or maybe 1 time in 100 if we are "lucky") and their apparent (or actual) lack of provenance.

The 7-year itch is now the 10-year glitch.

Sure, LOPA is not an exact science. Although I didn't get much feedback on the original post, one comment challenged me that LOPA is not quantitative, therefore "close enough is good enough". OK, but if you use LOPA to screen out high consequence/risk scenarios that you might wish to subject to QRA, and your screening is too coarse (in the same way that poorly calibrated risk matrices or risk graphs can fail to pass scenarios forward to LOPA), then you may miss one or two if you apply the tool slavishly, without thinking about what these situations and numbers really mean.

It's too easy to get dazzled by scientific notation and really small numbers and, for example, fail to flip (invert) a frequency and reflect on what it actually predicts: "this means once in 15 years, but I'm sure it's happened several times since I've been here". Just because the people in the room don't recall an event doesn't mean others on site haven't experienced it.
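
As a sanity check, the flip is one line of arithmetic, and the chance of seeing at least one event over a working lifetime is much larger than the raw frequency suggests. A quick sketch, assuming events arrive as a constant-rate Poisson process:

    import math

    freq = 6.7e-2  # events per year: the "small number" on the LOPA sheet
    print(f"Return period: once in {1 / freq:.0f} years")  # ~once in 15 years

    # Probability of at least one event during a 25-year site tenure,
    # assuming a constant-rate Poisson process:
    years = 25
    p = 1 - math.exp(-freq * years)
    print(f"P(at least one event in {years} years) = {p:.0%}")  # ~81%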

There is (quite rightly) a focus on proven in use or prior use for Safety Instrumented Systems or Functions - but who's giving the same attention to the other IPLs, the Initiating Events, the Enabling Conditions and the Conditional Modifiers? Do they not deserve the same scrutiny?

Now, a few years on and older, wiser and wider (connections), I'm throwing out the challenge again.

Who is using sensitivity analysis in their LOPA?

If not, why not?
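
For anyone looking for a starting point, here's a minimal sketch of one simple flavour: propagating an assumed spread on each LOPA input through the calculation with a Monte Carlo sample. The lognormal distributions and the error factor here are illustrative assumptions, not recommendations:

    import math
    import random

    def lopa_frequency(ief, pfds):
        """Mitigated event frequency = IEF x product of the IPL PFDs."""
        f = ief
        for pfd in pfds:
            f *= pfd
        return f

    # Point estimates from "generic" data (illustrative only)
    ief, pfds = 0.1, [0.1, 0.01]
    base = lopa_frequency(ief, pfds)

    def sample(point, error_factor=10.0):
        # Lognormal whose 90% interval spans point/EF to point*EF
        sigma = math.log(error_factor) / 1.645
        return random.lognormvariate(math.log(point), sigma)

    runs = sorted(lopa_frequency(sample(ief), [sample(p) for p in pfds])
                  for _ in range(10_000))
    print(f"Point estimate:      {base:.1e} /yr")
    print(f"5th-95th percentile: {runs[499]:.1e} to {runs[9499]:.1e} /yr")

Even this crude version tells you whether your conclusion survives the spread in the inputs, or whether the point estimate was doing all the heavy lifting.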

What other ideas are out there that we can all learn from ?

Let's not be gullible about other people's data or complacent about the impact that the margin of error could have.

Don't be afraid to speak out - if it doesn't look or feel right then it possibly or probably isn't, and that margin of error could literally be the difference between life and death.

I'm ready to receive all comments and criticisms - this is too "sensitive" (pun intended) to shy away from. If I'm wrong, I'm wrong, but let's have the debate! I'm also open to the possibility that over-estimating frequencies or probabilities may lead to "over-engineering" and excessive protection (can you have too much of a good thing?) that challenges profitability, which is - let's face it - the reason that businesses undertake risky activities.
