Professional Documents
Culture Documents
Volume 1
Introduction
Version 1 Issue 1
January 2005
Taylor Associates ApS
Preface
This report is the first of several volumes intended to provide a sound basis for
risk assessment calculations used in land use planning and emergency
planning. Studies by others have shown that risk analysis results can vary by as
much as a factor of 10, even when high quality methods are applied, and by as
much as a factor of 100 in benchmark studies. This study is intended to
determine why this is so, and to provide guidance on the choice of method and
of parameters.
Updating history
Contents
At the outset of the author’s own use of QRA, starting in 1972, the dangers of using risk
assessment were apparent. In particular, the possibility of overlooking a source of risk, and
thereby contributing to causing an accident, was an important motivation. The question of
choice of methodology became acute when the author, together with O. Platz, was requested
to make a comparative risk analysis for all of the Danish plants falling under the major
hazards directive (Seveso I). Concerns about possible oversights and errors have
led to a series of studies over a period of 30 years, the results of which are summarised in
these volumes.
Less clear-cut, but still a good demonstration of the value of the techniques, are the cases in
which clear and obvious improvements in safety have been justified by means of risk
assessment. Examples from my own country are the introduction of advanced passive fire
protection for LPG storage tanks, and the replacement of a large cryogenic ammonia storage
tank in a city centre by a cryogenic ammonia pipeline.
QRA is especially useful where large impact hazards exist, which require large investments in
safety equipment or other safety measures. Of about 100 land use planning projects
undertaken, over half were “difficult cases”, where fairly large investments in safety needed
to be justified. (The remainder resulted in simpler decisions, such as safety distances which
were well within the feasible range, and projects where large reductions in risk were achieved
with fairly simple and inexpensive means. None produced results where no risk reduction
was necessary.) Safety decisions costing over a million dollars are today only rarely made
without the use of some form of risk assessment, nor should they be.
Despite the successes, the current state of the art in risk assessment is far from problem free.
Risk analysis has long been an art, with wide variations in results arrived at by different
practitioners. As the saying goes, it requires a strong moral character to sell elastic by the
meter. Unfortunately, “favourably weighted” results are produced not only by analyses which
are less than honest. Favourable results can also be produced by analysts who are too
inexperienced to recognise important hazards, or insufficiently competent to recognise that
standard methods are inappropriate in a particular case. Of course such problems can in some
cases give needlessly unfavourable results in a land use planning project, as well as giving
unjustifiably optimistic ones. As an example, two of the standard gas release
dispersion programs made available by well respected authorities lead to differences in safety
zone distances of over a factor of 2. These differences apply even when the programs are used
correctly. It is still apparent that the state of the art is very dependent on the artist.
In order to be useful in the legal systems of land use planning, the methods need to be:
- Reproducible, so that one set of analysis results is not subject to immediate
challenge by counter expertise from an alternative set of experts.
- Reasonably reflective of actual risk conditions, so that analyses do not result in
counterintuitive or directly and obviously inappropriate recommendations. A
certain factor of uncertainty is acceptable, but not when it leads to a reversal of
ranking orders of preferred planning solutions, and definitely not when it leads to
safety planning distances being described as “between one and three kilometres”.
- Transparent, so that ordinary persons can see that the analyses are reasonable, and
detect the cases where the analysis does not reflect the conditions on the ground.
The need for reproducibility can be met by providing a fixed algorithm for the analyses, as
for example in the Dutch Yellow, Green and Purple Book system (ref. 2), the RISKAT
analysis method (ref. 3), or the collection of methods given in the CCPS guide (ref. 4). The
desire for transparency can be met by supplementing the algorithm by means of a good
pedagogic reporting practice, and by illustrating the analyses with examples of earlier
accidents (there are unfortunately plenty of them). However, the use of fixed algorithms
also means a fairly rigid system, which can lead to counterintuitive results,
especially when the project lies outside the originally intended area of application of the
algorithm and its data base.
Examples of major accidents which have occurred both within and outside the scope of
standard risk assessment methodologies are given in Volume 4. The solution to this problem
is fairly straightforward – add the problem types to the required list of accident types to be
considered in land use planning assessments.
The release frequency values available in data bases or standard references have been
collected in the best fashion possible at the time when the methods were established. The
accuracy of the data, though, is less important than the question of appropriateness of the data
in a specific case. Some examples can be given of just how important this question can be:
- Risk assessments for road transport of hazardous goods very often refer to the
authoritative study carried out on behalf of the UK HSC (ref.5). Most of the
frequency values in this report are based on collections of data which are directly
relevant to British hazardous goods transport and are well supported by data from
the actual systems concerned. Just one number in the report has a doubtful
pedigree, namely that for road transport pressurised tank truck accidents. The
value given was originally derived from US LPG transport data, and adapted to
apply to British transport, on one specially chosen route, using unhardened carbon
steel tanks. The adjustment from the US data reduced frequencies by a factor of
400 (see ref. 6 for similar data). When a full set of data was investigated, in
connection with a particular study, it was found that road tanker accident
frequencies with large releases could be a factor of 1000 higher, depending on traffic
and road type, number of rail crossings, and on the use of thin hardened steel tanks.
The question here is not one of uncertainty – most of the data sets compared were
both extensive and relatively free of confounding factors. The question is one of
use of appropriate data.
- Calculations were made for the frequency of large oil releases from piping in a
refinery. A fairly good, and well supported, value for piping failure rates is 30×10⁻⁶
per metre-year. (Literature values vary by up to a factor of 20 below this; the values
given in the Rijnmond study, ref. 2, for example, are 5×10⁻⁶ per metre-year.) When
this rate was applied to a refinery, a value of 6×10⁻² major releases from piping per
year was calculated. The company pointed out that this value is very optimistic,
and that they had 2 major releases in 10 years. Investigation showed that the two
releases arose due to an erroneous delivery of steel elbows, of the wrong quality,
for a highly loaded pump. The question then arises – to what extent are problems
such as design error, wrong choice of steel, poor pipe support, etc. included in
the data bases used for risk calculations? Certainly not sufficiently to cover this
kind of case.
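The arithmetic above can be sketched briefly. The failure rates (30×10⁻⁶ and 5×10⁻⁶ per metre-year) are taken from the text; the 2000 m pipe length is a hypothetical figure, chosen only so that the higher rate reproduces the 6×10⁻² per year value quoted:

```python
def release_frequency(rate_per_m_year: float, pipe_length_m: float) -> float:
    """Expected number of major releases per year for a piping network."""
    return rate_per_m_year * pipe_length_m

base_rate = 30e-6        # per metre-year, the well supported value from the text
rijnmond_rate = 5e-6     # per metre-year, the Rijnmond study value
length = 2000.0          # metres of piping - hypothetical, for illustration only

f_base = release_frequency(base_rate, length)          # about 0.06 per year
f_rijnmond = release_frequency(rijnmond_rate, length)  # about 0.01 per year
observed = 2 / 10.0      # the company's 2 major releases in 10 years
```

Even the higher literature rate underpredicts the observed frequency here by a factor of about three, which is the point of the example: the issue is appropriateness of the data, not its precision.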
One problem which occurs quite often in land use planning is that in industrial areas, there
may be several companies, each contributing to risk, with their risk “footprints” overlapping.
Should each company be allowed to contribute to the maximum of the risk acceptance limit?
In this case the individual risk for neighbours could grow, in some practical cases by as much
as 10 times that set as the original limit. Or should there be some kind of “risk rationing” in
an industrial area? In this case, should a very small formulating and repackaging company be
allowed to have the same size of risk footprint as a large refinery?
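The overlap question can be illustrated with a minimal sketch; the per-site risks below are hypothetical, and the simple summation assumes small, independent contributions:

```python
LIMIT = 1e-6  # per year, a typical individual risk acceptance criterion

def combined_individual_risk(site_risks):
    """Approximate total individual risk at a point as the sum of small,
    independent per-site contributions."""
    return sum(site_risks)

# Ten hypothetical companies, each contributing exactly at the limit:
sites = [LIMIT] * 10
total = combined_individual_risk(sites)
ratio = total / LIMIT  # about 10 times the per-site limit
```

The sketch shows why the question matters: if each company is allowed the full acceptance limit, the neighbour's risk grows with the number of overlapping footprints.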
Safety management
As a final issue, consider the impact of safety management on risk. Poor safety management
can increase risk by many orders of magnitude. As an example, in one plant the
maintenance manager was reluctant to have anything to do with risk analysis, and also lacked
a good deal of knowledge about safety systems design. Several valves introduced as a result
of hazops were removed from the plant, to avoid the need for maintenance. As a result, an
explosion occurred just five years after plant commissioning. The original calculated
frequency of such explosions was 4×10⁻⁵ per year. The direct cause was the lack of the valves; the root
cause a lack of knowledge and a poor safety attitude. No risk assessment has any validity if
problems of this kind exist.
In order to take this kind of dependency into account, some groups have introduced a
“management factor” into the risk assessment calculation (ref. 6, 7 ). Whether these factors
can cover the full range of variation seen around the world can be doubted, but at least the
problem is acknowledged, and some weighting given to it. Others have argued that the
problems of poor safety management should be solved, rather than calculated. From a study of
accident records in the US RMP data base it appears, though, that safety management issues
dominate the pattern of risk, even for plants in the USA, where plants have been subject to US
OSHA/EPA regulations on safety management for ten years (see fig. 1). One could perhaps,
on the basis of the data, take standard equipment release rates and multiply them by a factor of 3, to
ensure that safety management was taken into account. This approach, though, overlooks
the wide swings in both engineering design and safety management standards which exist, as
will be seen below.
[Fig. 1: Accident cause categories in the US RMP data base — Other, Management, Weather, Materials, Design, Maintenance, Bypassing, Procedures, Human error, Equipment — plotted on a scale of 0 to 35.]
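A “management factor” adjustment of the kind referred to above (refs. 6, 7) might be sketched as follows; the audit-score mapping used here is purely illustrative, not a published scheme:

```python
def management_factor(audit_score: float) -> float:
    """Map a safety management audit score in [0, 1] (1 = excellent) to a
    release frequency multiplier. Illustrative only: an average plant gets
    roughly the factor of 3 mentioned in the text; a poor plant gets much more.
    """
    worst, best = 10.0, 1.0
    return worst * (best / worst) ** audit_score

base_frequency = 1e-4  # per year, a hypothetical equipment release rate
adjusted = base_frequency * management_factor(0.5)  # roughly 3x the base rate
```

The open question raised in the text is precisely whether any such smooth factor can span the real range of variation, which is far wider than a factor of 10.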
One problem with all of the currently published risk assessment methodologies is that they do
not take into account the efforts made by plant engineers in reducing risks, except in a few
highly standardised areas such as fitting of emergency shutdown valves, or mounding of LPG
vessels. Current published methodologies do not provide methods which allow for high
quality piping, for risk based inspection(!), for high states of readiness in fire fighting, for
fixed fire protection systems, for water curtains, and many other small details. It is hard to
describe how frustrating engineers find this. Trying to improve safety, and then to be given no
credit for the result, results in a very negative attitude. One could ignore this frustration in
regulation, of course, but the really bad result is that companies direct their attention
elsewhere, often to less effective methods of risk reduction for which they can be given
credit.
At the worst, this can lead to an increase in risk. An example is the insistence of some
methodologies on giving credit only for passive safety measures. The US RMP guidance on
offsite consequence assessment, for example, bypasses this issue in its worst case
calculations, by requiring scenarios with total release of vessel inventory within 10 minutes.
This ignores the fact that a good modern ESD system, designed and maintained to SIL 2
standards, reduces risk by two orders of magnitude. Even worse, it ignores the fact that
reducing piping diameter by a factor of two reduces hazard distances typically also by a factor
of two, for nearly all realistic scenarios having effects outside the plant.
Unfortunately, it is rare that repeatability of risk analysis results can be achieved. The reason is that there is a
very large number of assumptions involved in any risk analysis, from the basic data
concerning vessel contents (Is the tank always full, or most often nearly empty ?) through to
details such as the type of soil and its water content, important for conductivity calculations.
While all critical assumptions could in principle be recorded and checked, the number of
these in a high quality analysis is so large as to make such recording impractical.
In order to overcome this difficulty during the present investigations, a computer program,
QRA Pro was written. This program allows each parameter in every model to be recorded,
and allows the underlying assumptions behind each choice to be recorded. A large number of
standard choices is provided, in order to keep the work load in recording to a minimum. The
program also allows all of the calculations to be carried out automatically, guaranteeing
reproducibility (ref. 30). (This kind of reproducibility has been demonstrated in practice. It
turns out to be extremely useful in making comparisons, for example for before and after
assessment of risk reduction measures). The program provides a very large number of
different models and input parameters, and can be extended with special purpose sub
programs, so that flexibility in analysis is not restricted to any large extent.
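The record-keeping idea behind QRA Pro can be sketched as a data structure in which every parameter carries its value, units, and the assumption justifying the choice. The class and field names here are hypothetical, not the actual QRA Pro design:

```python
from dataclasses import dataclass, field

@dataclass
class Parameter:
    name: str
    value: float
    units: str
    assumption: str  # the recorded justification for this choice

@dataclass
class ModelRecord:
    model_name: str
    parameters: list = field(default_factory=list)

    def add(self, name: str, value: float, units: str, assumption: str) -> None:
        """Record a parameter choice together with its underlying assumption."""
        self.parameters.append(Parameter(name, value, units, assumption))

# Hypothetical usage: two of the assumption types mentioned in the text.
record = ModelRecord("pool_evaporation")
record.add("tank_fill_fraction", 0.9, "-", "Tank assumed near full (worst case)")
record.add("soil_water_content", 0.2, "kg/kg", "Typical value for the site soil type")
```

Because every choice is recorded alongside its justification, a reviewer can re-run the analysis and audit exactly the assumptions that drove the result.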
A further issue here is what we mean by accuracy. A usual criterion for a scientifically based
model is that results calculated with the model should agree with experiment or observation.
In risk assessment, we cannot carry out experiments on full scale accidents, and we certainly
cannot investigate a full range of accident scenarios in this way. We are forced to rely on
observations from accidents which have actually occurred. Only a few of these however have
been documented in depth, so our assessments of accuracy and uncertainty will themselves be
uncertain. Nevertheless, there are some cases in which risk assessments can be validated, as
will be seen in later volumes.
Comparing risk calculations made with industry average data against actual accident
frequencies means that we are aiming at “accuracy on average”. If we have a procedure
which provides good results of this type, then we will be able to predict the average accident
rate for a number of similar plants, and find that this agrees with experience.
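An “accuracy on average” check of this kind reduces to simple expected-count arithmetic; all numbers below are hypothetical:

```python
def expected_accidents(n_plants: int, years: float, freq_per_plant_year: float) -> float:
    """Expected accident count across a population of similar plants,
    given a predicted frequency per plant-year."""
    return n_plants * years * freq_per_plant_year

predicted = expected_accidents(50, 10, 4e-3)  # about 2 expected accidents
observed = 3  # hypothetical observed count across the same population
```

If predictions of this kind repeatedly agree with pooled experience, the procedure is accurate on average, even though, as argued below, it may say little about any individual plant.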
Unfortunately, this kind of accuracy is rather unsatisfactory. As inspection audits and accident
records show, no two plants are ever identical, and risk in two nominally identical plants can
vary in practice by a factor as large as 100 (see e.g. ref. 33). Ideally, we should have “plant
specific accuracy”. This means that if the same (unchanging) plant is observed over a
number of years, the observed accident frequency should agree with predictions. Such a test
could obviously never be carried out in practice, if only because accident frequencies increase
with plant age, and fall as a result of the risk analysis process itself in any properly organised
company. Nevertheless, some aspects of the plant specific accuracy of risk estimates can be
assessed, especially if causal analyses are made of near misses and potential accident initiating
events.
Plant specific accuracy is desirable because it is often the weak points in safety defences of a
plant which actually give rise to accidents. A risk assessment which is “accurate on average”
may fail to identify weaknesses, or even worse, serve to reassure when reassurance is
unjustified. Risk analysis should not become a cushion to sleep on.
Achieving plant specific accuracy requires significant extension to current risk analysis
practices. Some factors which will be important in this sense are:
The first studies undertaken were those for completeness of hazard identification, starting in
the 1970’s with comparison studies of HAZOP analyses carried out by different persons and
using different approaches. (ref. 33) These were followed up with a review of a large number
of Hazop studies carried out by various teams of consultants and company staff in the early
1990’s (ref. ). These studies have been followed up with a review of the currently most
effective methods, with results reported in volume 2 of this report.
The data used for risk assessment, particularly release frequencies and hole size distributions,
is critical to the performance of risk assessment. High quality data has become available
during recent years for offshore plant, but most data for chemical and petrochemical plant
onshore has been derived by engineering judgement, and often is secondary data which can
be traced back to studies carried out in the 1970’s.
In order to solve this problem, an exhaustive in depth study was carried out of release data,
covering about 12000 plant years of experience. A causal analysis was also made, so that the
effect of changes in engineering and integrity management standards can be calculated. QRA
results using this data have been compared with results using data which are drawn from the
literature. These results are discussed in volume 5 of this report.
There are still wide uncertainties in the choice of consequence models used in risk assessment.
These in part reflect experimental uncertainties, but it was found in this study that many
models used in present day risk assessments are actually inconsistent with well established
experimental evidence. In order to assess the impact of this, a full scale risk calculation
package was written, in which alternative models can be chosen. A sensitivity study was then
carried out, to determine the importance of the choice of model in determining risk analysis
results. A study was also carried out comparing the results of calculations with those from
experiments, and with observations from actual accidents. The results of these studies are
given in volume 3 of this report.
The actual methodology used for risk assessment affects the results. In particular, there are
the following differences between standard published methodologies:
In order to investigate these aspects, an extended sensitivity analysis was
carried out. Six methodologies were rigorously defined, and applied to six different “virtual”
plants. This allowed detailed sensitivity analysis to be carried out, both on variations in
methodology, and variations in plant design. The plants were a 300000 bbl per day refinery, a
fertilizer plant with ammonia and sulphuric acid production, a speciality chemicals plant, a
pharmaceuticals plant, a chemicals warehouse, and an LPG storage terminal. Flow sheets,
piping and instrumentation diagrams and plant layout were developed for these. Three
different sitings were investigated for the plants, one typical of Central and South America,
one typical of the Middle East, and one European. These different sites affect the distance to
other industry and to population centres, average temperatures, and wind speeds.
The methodologies chosen for investigation were based on the Dutch “Purple Book”; a
methodology based on deterministic criteria; one which is based on quantified hazard and
operability analysis; one based on the CCPS guideline; and an upgraded QRA methodology
which uses US statistics for plant accidents collected under the RMP rule. The methodologies
differ in the degree to which they use plant failure rate data. In some methodologies accident
frequency data is only given for “vessels, tanks, hoses”, i.e. a limited list. In the last
methodology, accident frequency data were obtained for 60 different equipment types.
Upgraded consequence models were also used, based on reports published by UK HSE and
by Shell (see Vol 3 for a full list). Results were checked using four different consequence
calculation packages, the main calculations being done using the author’s QRA Pro
consequence calculation suite, which was upgraded for the project, in order to allow a range
of different consequence calculation methods to be investigated. The results of these studies
are given in vol. 4 of this report.
In order to investigate the effect of safety engineering, each plant was analyzed with three
assumptions about engineering standards. The first used practices based on US and
international standards from the 1970’s; the second made use of modern US and international
standards and especially modern oil company standards. The third made use of “high integrity
engineering” principles. Each of these sets of assumptions was rigorously defined, by means
of a design handbook. Plant layout and spacing were also investigated. The effects of these are
considered in vol 5 of this report.
The effect of differing safety management standards was investigated by means of a detailed
model, based on an extensive review of accident cases, and on data from safety audits at a
large number of plants of the kind studied. The effects of these aspects are considered in vol.
6 of this report.
Because the plants investigated are “virtual”, i.e. do not exist in reality, it is possible to review
the analysis results openly, without problems of commercial security arising. The examples
can therefore be investigated and reviewed by specialists quite openly.
4. Earlier work
Suokas and his colleagues carried out a series of studies in the late 1970’s and early 1980’s
on the completeness of hazard identification (ref. 31). These studies were parallel to, and
used similar approaches to, those reported in volume 2, and resulted in a short monograph by
Suokas and Taylor on the pitfalls of risk assessment (ref. 34).
There have been several earlier critical reviews of consequence calculation methods. Two
which are of special interest here are the review of the use of heavy gas dispersion models by
Hanna, Rivas and Chang, and the review of gas dispersion by Kaiser. Descriptions of, and
references for these, are given in volume 3 of this report. A series of reviews by Deaves, Rew
and their colleagues, and by the Health and Safety laboratory, commissioned by the UK
Health and Safety Executive, provide an up to date review of a large range of models, which
have been used in this study as a basis for comparisons with the standard models, as given for
example in the Dutch Purple Book (ref. 14). The models described in these reviews have been
investigated alongside the standard models, in Ch. 3, and have been used as alternatives in the
full scale risk assessments in Ch. 4.
The published project most relevant to this study is the benchmark study carried out by a
consortium of specialist groups under the auspices of the European Community (ref. 6). This
project established a “virtual” process plant, in a similar way to the present study. The plant
(an ammonia fertiliser plant) was analysed independently by all the groups. The risk analysis
was carried out by nine teams in all, and their results compared. The initial set of results
varied widely in critical parameters. The number of fatalities for accidents occurring at 10⁻⁶
per year varied from 300 to 30000, and the frequency for 100 person accidents varied from
5×10⁻⁶ to 2×10⁻⁴ per year. The ratio between the largest safety distances (distance to 10⁻⁶ per
year fatality risk) varied less, with a factor of 2 between the smallest and largest safety
distances. In a second phase of the project, the differences were investigated and, as far as
possible, resolved (refs. 4, 5).
The results in the benchmark study have served as a guide to the present study. Even with
seven comparable studies, it is not possible to investigate all aspects of variation in a risk
analysis. The present study is intended to complement the benchmark, by providing a single
framework, in which many significant parameters can be varied independently.
Unfortunately, the benchmark studies did not provide a definitive answer to the questions of
repeatability and accuracy, or rather, they provided negative results, showing that the
uncertainties in analysis gave variations of up to five orders of magnitude. Such a result is
useless in engineering. The earlier studies, however, have for the most part been flawed by
including unvalidated methods in the comparisons.
The QRAQ project makes use of a different approach. A fairly wide range of chemical plants
was analysed using a wide range of methods, but all within the same analysis framework.
This allows individual features of the analysis methodology to be investigated and sensitivity
analyses to be carried out. Also, many of the results could be compared with worldwide
experience of large accidents, so that the overall performance could be validated.
It may seem strange that an in depth study of QRA methodology and validity is made now,
after QRA has been in use for over 30 years. However, there has so far been little published
work of this kind producing anything but negative results. An exception is the RIVM
Benchmark study, which demonstrated reproducibility to within a factor of 2 when using a
fixed procedure and fixed set of models.
Study: Comparison of jet dispersion models
Description: Comparison of the Yellow Book (Chen and Rodi), simple momentum jet, Quest, and Hoot, Meroney and Peterka models with experimental data, and a sensitivity study of the effect of the choice of jet model on risk analysis.

Study: Comparison of blow up models
Description: Comparison of the models available for initial dispersion of liquefied gases with actual observations from accidents, and the sensitivity of risk analyses to the choice of model.

Study: Comparison of plume dispersion models
Description: Comparison of the widely used models (Cox and Carpenter, SLAB, Degadis, HGSYSTEM, and UML) and a newly developed model, TAPlume, with actual accident data. The study investigates not just the dispersion models themselves, but also the source term (release rate) and initial jet dispersion model used. Considers the effect of new models for turbulence velocity in industrial and urban locations. The study investigates the effect of choice of model on risk assessment, land use planning, and emergency planning.

Study: Gas dispersion in industrial and urban locations
Description: Compares widely used gas dispersion models with the results from CFD modelling, in order to assess near field effects. Provides phenomenological models for impinging and semi-confined jets.

Study: Selection of gas toxicity criteria
Description: Investigates the effect on risk assessment of the choice of criterion (LC50 only, or a full range of LC criteria) and of the use of time scaling of toxic gas exposure. Investigates the use of AEGL, ERPG, and IDLH data in emergency planning.

Study: Effect of choice of wind speed categories
Description: In risk analyses it is usual to choose one or two wind speeds, and typically two or three stability categories. This study provides a sensitivity study of the effect of this choice on risk assessment.

Study: Pool spread model selection
Description: Compares the effect of different models, and different choices of pool limitation assumptions, on the rate of evaporation of volatile materials. A comparison is made with observations from actual spills, and the effect on risk assessment is evaluated.

Study: Very low wind speed dispersion
Description: Implements the recommendations from the UK HSE review of low wind speed dispersion, and determines the effect of this on land use planning and emergency planning.

Study: Indoor release
Description: Investigates models for release of gases indoors, such as ammonia from refrigeration systems, and the choice of plume dispersion source terms. Provides comparison with actual accident data, and an evaluation of the effect of the modelling on land use planning and emergency planning.

Study: Ignition probability models
Description: Compares five different approaches to determining ignition probabilities (the Cox, Lees and Ang; IFAL; Purple Book; JIP; and UK HSE/Atkins (Rew) models). Results are compared with actual accident experience, and the usefulness of the models is assessed.

Study: Pool fire model selection
Description: Compares pool fire models with observations from actual fires, and in particular the effect of heat radiation on emergency personnel fighting fires. Provides conclusions on the proper location for installation of fire water monitors.

Study: Jet fire model selection
Description: Compares jet fire models with observations from actual fires, and in particular the effect of heat radiation on emergency personnel fighting fires. Provides conclusions on the proper location for installation of fire water monitors, and on the value of upgraded emergency shutdown systems and of passive fire protection.

Study: Unconfined vapour cloud explosion modelling
Description: Compares TNT equivalence type models, the multi energy model in its Yellow Book form, the UK HSE GAME upgrades to the multi energy model, and phenomenological models. Compares the model predictions with actual accident data (Flixborough, Milford Haven, Copenhagen), and determines the effect of choice of model on land use planning and on control room building design.

Study: Effect of water curtains, steam curtains and water sprays from hoses on heavy gas dispersion
Description: Describes the available models and their effects on plume sizes. Investigates the effect on risk.

Study: Effect of walls, berms and slopes on heavy gas dispersion
Description: Investigates heavy gas flow and the ways it can be obstructed or channelled. Provides conclusions about the value of fitting protective walls for the case of low wind speed.

Study: Risk calculation for transport of hazardous material
Description: Provides a study of the accuracy of different methods of calculating risk along road transport routes.

Study: Effect of safety engineering design standards on risk
Description: Compares the effect of traditional design with modern high integrity design on risk for major hazards plants. Compares the effect of modern emergency measures on emergency planning.