

This article was downloaded by: [LSE Library]
On: 06 January 2014, At: 04:00
Publisher: Routledge
Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered
office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK

Journal of Risk Research


Publication details, including instructions for authors and
subscription information:
http://www.tandfonline.com/loi/rjrr20

The Deepwater Horizon explosion: non-technical skills, safety culture, and
system complexity

Tom W. Reader (a) & Paul O'Connor (b)

(a) London School of Economics, Institute of Social Psychology, London, UK.
(b) Department of General Practice, National University of Ireland, Galway, Ireland.

Published online: 29 Jul 2013.

To cite this article: Tom W. Reader & Paul O’Connor (2014) The Deepwater Horizon explosion: non-
technical skills, safety culture, and system complexity, Journal of Risk Research, 17:3, 405-424,
DOI: 10.1080/13669877.2013.815652

To link to this article: http://dx.doi.org/10.1080/13669877.2013.815652

Journal of Risk Research, 2014
Vol. 17, No. 3, 405–424, http://dx.doi.org/10.1080/13669877.2013.815652

The Deepwater Horizon explosion: non-technical skills, safety culture, and
system complexity

Tom W. Reader (a)* and Paul O'Connor (b)

(a) London School of Economics, Institute of Social Psychology, London, UK;
(b) Department of General Practice, National University of Ireland, Galway, Ireland

(Received 18 January 2013; final version received 21 May 2013)

The explosion and destruction of the Deepwater Horizon (DH) was a watershed
moment for safety management in the US oil and gas industry. The 2011 National
Oil Spill Commission investigation identified a range of operational behaviours
and underlying safety management problems that were causal to the mishap. Yet,
to date these have not been systematically considered within a human factors
framework. To achieve this, we draw upon two applied psychology domains that
are highly influential within safety research. First, we apply non-technical
skills (NTS) theory (social and cognitive skills that underpin safe performance
in complex work environments) to understand operational activities in the
lead-up to and occurrence of the well blowout. NTS research is used to develop
interventions for training and observing safety behaviours (e.g.
decision-making, teamwork). Second, we apply safety culture theory to
understand how the organisational and industry environment shaped the
management of risk. Safety culture research is used to understand and change
the socio-technical constraints and enablers of safety activity in high-risk
workplaces. Finally, to integrate these perspectives, we take a
systems-thinking perspective to understand the mishap. A common critique of
accident narratives is their failure to systematically consider how the
components of an incident interact together to escalate risk. From a
systems-thinking perspective, understanding the interactions leading to the DH
mishap is crucial for ensuring interventions are effective in preventing future
mishaps. We develop an accident model that captures the various interactions
and system factors leading to the blowout.
Keywords: Deepwater Horizon (DH); non-technical skills (NTS); safety culture;
accident analysis; systems-thinking; psychology

Introduction
Despite extensive research investigating safety within the oil and gas sector, serious
mishaps still occur. The 2010 explosion and sinking of the Deepwater Horizon
(DH) semi-submersible drilling rig in the Gulf of Mexico resulted in 11 fatalities,
and the world’s worst offshore oil spill. As the exploration and production of
hydrocarbons moves to deeper, remoter, and more ecologically sensitive settings, learning
from incidents becomes even more essential. To date, analyses of the DH mishap
have tended to focus on technical decision-making (BP 2010), the detection of
blowouts (Skogdalen, Utne, and Vinnem 2011), and safety regulation (Steinzor
2011). The investigation by the National Oil Spill Commission (2011) highlights
behaviours and organisational factors underlying the incident. However, these are
yet to be considered systematically within a human factors framework; this is
necessary for developing appropriate interventions and refining theory on accident
causation. In this paper, we consider the DH mishap from a non-technical skills
(NTS) and safety culture perspective. Then, through applying a systems-thinking
perspective, we reflect on the role of the human factors-related problems underlying
the incident.

*Corresponding author. Email: t.w.reader@lse.ac.uk
© 2013 Taylor & Francis
Mishap data often provide the catalyst for safety interventions; however, their
actual contribution to safety improvement can be limited. For example, data from
accident investigations often identify multiple contradictory contributory factors
linked to a mishap (Katsakiori, Sakellaropoulos, and Manatakis 2009), they
interpret behaviours out of context (Rasmussen 1997), lessons can quickly become
obsolete (Kirwan 2001), obvious problems are easier to identify than latent threats
(O’Hare 2000), and the influencers of behaviour are often poorly described
(Lawton and Parker 1998). Analyses frequently identify causes that are not observable
in real-time, and ‘risky’ behaviours in one context can prevent mishaps in another
(Perrow 1999). A recurring critique of accident analysis techniques is their focus
on the individual components of an incident (i.e. the actions preceding a mishap),
rather than their interdependencies or systemic causes (Leveson 2011). Systems-
thinking approaches attempt to explain the accident process, and to understand
how the beliefs and actions of an operator are dependent on factors outside their
control or awareness. To be effective, interventions to reduce future events must
consider these interdependencies (Rasmussen 1997).
The National Oil Spill Commission’s (2011) investigation of the DH provides
rich data on why the mishap occurred. Specific instances of behaviour and
organisational management are described. Yet it does not frame these using relevant
psychology literatures, or fully consider system interdependencies. This is necessary
for (i) describing the underlying psychological dimensions (rather than specific
instances) of worker behaviour and organisational environment that led to the
incident, (ii) explaining how risky activity occurred, and (iii) developing
interventions to reduce the likelihood of error or future mishaps. DH is a seminal event for
safety management in the offshore industry, and we apply psychology theory to
understand the behaviour and organisational environment underlying the incident.
We consider the incident using two of the leading approaches used to understand
safety-related activity in high-risk workplaces, and then adopt a systems-thinking
perspective to integrate our findings.
The first, NTS, refers to the cognitive (decision-making, risk assessment,
situation awareness (SA)) and social (teamwork, leadership) skills that underpin safe
performance in high-risk workplaces (Flin, O’Connor, and Crichton 2008).
Psychology research demonstrates the importance of worker NTS for managing
safety emergencies offshore, although offshore research in the area remains quite
limited (O’Connor and Flin 2003). NTS research analyses the non-technical skill
requirements of a workplace through investigating accident data, and triangulates
this analysis through other data collection techniques such as observations, surveys
and interviews (Flin, O’Connor, and Crichton 2008). Based upon this analysis,
training, assessment, and other safety intervention programmes are tailored to the
specific NTS needs of a work domain.

The second, safety culture, is a broad concept for understanding how
organisational and industry management shapes beliefs and activities relating to
safety (Guldenmund 2000). Its roots are in the psychology and anthropology
literatures, and the concept is used by researchers and industry specialists to
understand and assess organisational safety (e.g. through surveys, focus groups).
Interventions to improve safety culture focus on the socio-technical constraints and
enablers (e.g. ability to report safety incidents, attitudes towards risk) of safety
activity, and safety culture is a well-researched concept within the offshore setting
(Mearns, Whitaker, and Flin 2001). Safety culture and NTS are related concepts,
with safety culture influencing how personnel on the platform behave. For example,
based upon a survey of 722 UK offshore oil personnel, it was found that unsafe
behaviour is predicted by perceptions of production pressure (Mearns, Flin, and
O’Connor 2001). To integrate the findings of the above analysis, we adopt a
systems-thinking approach to explain the mishap (Leveson 2011).
A case study approach was taken to the analysis of the DH mishap, using the
final report of the National Oil Spill Commission (2011) as the primary
source of information. The review was carried out by two industrial/organisational
psychologists with experience carrying out safety research across a range of high-
risk industries. First, the events that led to the mishap were identified from the
National Oil Spill Commission’s (2011) report. The researchers primarily focussed
on decision-making events highlighted within the report as contributing to the inci-
dent (see Table 2) and then explored the behaviours and context surrounding these
events. Through this analysis, the key stages that led to the mishap were identified,
and the interactions between the events were delineated. Second, the events (and
the interactions between them) were then considered and interpreted within the
context of the non-technical skills and safety culture literatures (with specific
reference, where possible, to offshore safety research or other academic articles on
the DH).

The Deepwater Horizon mishap narrative


The Macondo Blowout occurred after an explosion on the DH rig (located 40 miles
southeast of the coast of Louisiana) on 20 April 2010. The rig was owned by
Transocean, and leased by BP. With 126 workers on board at the time of the
mishap, Transocean’s flagship drilling rig had been located at the Macondo site
since 31 January 2010. An accidental release of hydrocarbons onto the rig floor
occurred at 21:40. Ignition occurred 15 minutes later. The resultant fire was
extinguished after 36 h with the sinking of the rig. The mishap resulted in 11 fatalities,
and serious injuries to a further 16 workers. Four days later, it was publicly
acknowledged by BP that a damaged well-head was leaking oil into the Gulf. After
four months, a relief well drilled 18,000 feet below the ocean surface permanently
sealed the well on 19 September 2010. Approximately 4.9 million barrels of crude
oil were believed to have been released into the Gulf. A list of offshore drilling
terminology necessary to understand the mishap is presented in Box 1, and the four
key stages leading to the DH explosion are outlined in Table 1. Consistent
with Reason’s (1997) accident causation model, active (e.g. risk-taking behaviours
on the rig) and latent issues (e.g. organisational culture, risk management, technical
design) are seen as having led to the failures of the well and blowout preventer.

Box 1. Terminology for understanding the Deepwater Horizon mishap.

Annular space: the space between the well casing and the surrounding rock formation.
Blowout preventer: a large valve used to control and sever the link between the oil well and the platform.
Cement plug: a piece of cement used to seal the well below the seabed.
Cement slurry (with nitrogen): cement with tiny bubbles of nitrogen injected into it in order to reduce the weight placed upon the well. Unstable nitrogen foam slurry can become porous, with nitrogen breaking out of the cement.
Cementing process: cement is pumped down the well and up the annular space so that it seals the production casing to the surrounding rock formation.
Channelling: leaving gaps in the annular space (during the placement of the production casing) through which hydrocarbons can flow up.
Centralisers: components placed between sections of the production casing to ensure optimal positioning of the casing in the well, and even displacement of drilling mud from the wellbore.
Drilling mud: fluid used to aid the drilling of boreholes into the earth (e.g. keeping the drill bit cool, transporting rock cuttings to the surface).
Drill string: hollow sections of drill pipe consisting of drill collars, tools, and bits that are used to drill oil wells.
Float valves: valves on the production casing that ensure cement and mud can be pumped into the well without reversing direction and flowing back up the well.
Hydrocarbon zone: the reservoir of hydrocarbons being extracted.
(Well) kick: a flow of hydrocarbons and/or water into the wellbore and up the annular space (signalling that a blowout is going to occur).
Lockdown sleeve: a device used to lock in place the wellhead and production casing.
Lost return/circulation: occurs when drilling fluid leaks into the surrounding geological formations (instead of returning up the annular space), potentially increasing the fragility of the rock being drilled.
Mud line: the ocean floor where the wellhead is placed.
Negative pressure test: used to assess whether, under conditions of low pressure, hydrocarbons leak into the well.
Production casing: a continuous wall of steel placed between the hydrocarbon zone at the bottom of the well, and the wellhead on the seafloor.
Riser: the link cable between the well and the drilling rig.
Shoe track barriers: barriers for blocking hydrocarbons from entering the bottom of the pipe.
Spacer: chemicals used to separate drilling mud from cement slurry.
Temporary abandonment: the well is closed off to allow the drilling rig to be moved on, and a smaller production unit to begin hydrocarbon extraction.

NTS problems identified in the Deepwater Horizon mishap investigation


In the section below, we consider the key NTS-related problems identified as causal
to the mishap by the National Oil Spill Commission (2011). Consistent with
NTS theory, we consider contributory factors that are described at the ‘cognitive/
individual’ level and the ‘social/team’ level. Although a simple distinction, it is
useful for describing the types and causes of behaviours underlying mishaps, and
has been used to facilitate the development of offshore interventions (Gordon, Flin,
and Mearns 2005). NTS research normally focusses on the ‘active’ behaviours of
operational staff in safety-critical environments, and therefore, we focus
predominantly upon the NTS of personnel on, or with responsibility for, the drilling rig.

Table 1. Key stages leading to the explosion on the Deepwater Horizon (National Oil Spill
Commission 2011).

Key stages | Causes | Report page nos.

1. The cement barrier used to isolate the hydrocarbon zone at the bottom of the well from the annular space failed | Errors in conducting and interpreting the negative-pressure test, creating the belief that the cement job had been successful; errors in the design of the cementing process; the use of an inappropriate foam cement slurry to seal the well; design of the temporary abandonment, which resulted in overly high levels of pressure being placed on the cement job | 101–107

2. Hydrocarbons entered the well and travelled up the riser | Failure of the cement job integrity; errors in monitoring and interpreting real-time data displays showing signs of a ‘kick’ | 109–113

3. Hydrocarbons on the rig floor ignited | Hydrocarbons were not contained, and diesel generators ingested and released them onto deck areas where ignition was possible; deck areas lacked automatic fire and gas detection systems, resulting in equipment in potential ignition locations not being shut down | 113–114

4. The blowout preventer (BOP) used to seal the well, and prevent the uncontrolled flow of hydrocarbons towards the rig, did not activate | The cables linking the emergency disconnect system (EDS) and the BOP were damaged by the fire; failures in the maintenance of the BOP (possibly of its batteries) prevented activation of the emergency automatic system for shearing the drill pipe and sealing the wellbore | 114–115

Cognitive/individual factors
Research to understand mishaps in complex organisational settings often focuses on
the ‘cognitions’ (e.g. decision-making, risk assessment, SA) of the operators most
closely associated with a mishap. The DH investigation highlights a variety of
situations where problems in decision-making, risk assessment, and SA contributed
to the mishap.

Decision-making
The National Oil Spill Commission (2011) report highlights a series of operational
and managerial decisions which (either directly or indirectly) contributed to the
mishap. Many of the decisions identified refer to the design and management of the
Macondo well, for example the depth of the well or the cement casing that was to
be used (Hopkins 2012). Decisions involved participants distributed across locations
and various companies, and Table 2 lists key decisions identified as problematic.
Table 2. Key organisational decisions in the immediate lead-up to the Deepwater Horizon
mishap (National Oil Spill Commission 2011).

Date (all 2010) | Primary decision-makers | Decision | Factors influencing decision-making | Report page nos.

February–April | Halliburton cement design and BP team | Omission to investigate or consider negative results on the stability of the foam cement slurry | Saved time and costs; however, the possibility that the nitrogen foam slurry would be porous was not investigated. The BP team were not aware of the potential problems with the cement slurry | 101–102

9th April | BP engineering team | Cessation of drilling activities at 18,360 feet | At 18,193 feet cracks started to appear in the well formation. To avoid further damage, fractures were plugged, and drilling to the intended depth of 20,200 feet was stopped | 93–94

11th–15th April | BP engineering team | Placing and cementing of a final production casing string to recover oil and gas from the Macondo well | The exploratory Macondo well had drilled into an accessible hydrocarbon reservoir of at least 50 million barrels, although the rock formations at the bottom of the well were recognised as fragile | 94

14th April | BP engineering team | Use of ‘long-string’ casing rather than a ‘short-string’ | Computer modelling initially indicated that a long-string casing could not be reliably cemented. A short string would have been easier to cement into place at the Macondo well; however, it was considered more leak-prone over the life of the well | 95–96

15th April | BP engineering team | Use of six centralisers, rather than the specified 16 | Use of six centralisers avoided risks associated with alternative types (e.g. slip-on centralisers), and avoided costly time-delays in waiting for additional centralisers. However, initial design calculations and further computer modelling indicated that the production casing would require more than six centralisers to avoid channelling (which would allow hydrocarbons to flow up the well-casing) | 96–97

19th April | BP engineering team | Omission to investigate why four times the normal amount of pressure was required to convert the float valves (for unidirectional pumping of cement and mud), or why circulating pressure after conversion was lower than expected | The BP and Transocean teams concluded that the pressure gauge being relied upon was broken, and further investigation of the anomalous readings would have delayed the cement job. However, it was not categorically known whether the float valves had converted | 98, 102, 116

19th April | BP engineering team | Limiting the circulation of drilling mud to 350 barrels rather than 2760 barrels | Circulation of 350 barrels of mud reduced the likelihood of a lost-return event and damage to the well. However, the lower volume of mud meant technicians could not fully circulate and examine mud from the bottom of the well for hydrocarbons | 100

19th April | BP engineering team | Specifying a low flow rate for pumping cement into the well | This limited pressure on the well formation, but it also reduced the efficiency of mud displacement from the annular space | 100

19th April | BP engineering team | Limiting the volume of cement to be pumped down the well to extend 500 feet above the hydrocarbon zone | This reduced pressure on the geologic formation in order to protect the viability of the well. However, this deviated from internal guidelines specifying that cement should be pumped to 1000 feet above the hydrocarbon zone, and it increased the likelihood of error in cement placement | 100

19th April | BP engineering team | Use of a lighter nitrogen-based cement formula | Pressure on the geologic formation was reduced due to the cement being lighter; however, BP had very limited expertise in using this formula in the Gulf of Mexico | 100

20th April | BP engineering team | Rejection of the full suite of tests for evaluating the cement job | Saved time and money; however, it meant the evaluation of the cement job was reliant on a far more limited set of data | 102

20th April | BP engineering team, with MMS approval | Setting the cement plug 3300 feet below the mud line | Reduced the likelihood of damage occurring to the lockdown sleeve, but placed greater pressure on the cement job at the bottom of the well | 103–104

20th April | BP engineering team | Replacing 3000 feet of mud in the well with lighter seawater for setting the cement plug | Use of seawater avoided mud contaminating the cement plug whilst it set. However, the lighter seawater placed greater pressure on the cement job at the bottom of the well | 104

20th April | BP engineering team | Not installing additional physical barriers to stop the flow of hydrocarbons up the production casing, and displacement of mud from the riser before setting the cement plug | Saved time and costs, but meant that the cement job at the bottom of the well was the only physical barrier stopping the flow of hydrocarbons up the production casing | 104

20th April | BP engineering team | Use of a spacer created from lost-circulation materials that had been used to patch fractures in the formation caused by lost returns | Saved time and costs, but had not been used or tested by BP previously and created additional work | 106

20th April | BP and Transocean crew | Absence of further investigation into anomalous results during the negative pressure test | Saved time and costs, but meant that problems in the cement job were not identified or understood | 105–108

To understand why these decisions occurred, it is necessary to consider them
(initially) from an SA perspective (e.g. to consider the awareness and understanding
of decision makers for information relating to key decisions). We then consider the
social factors that shaped decision-making.

Situation awareness (SA)


SA refers to the perception, comprehension and anticipation of future events in
complex and dynamic environments (Endsley 1995). Although the utility of SA as
a theory for conceptualising human cognition in real-life environments is debated,
research investigating offshore safety frequently utilises it to understand safety
failures (Sneddon, Mearns, and Flin 2006). In the discussion of the failures of SA
in the DH mishap, we distinguish failures of SA relating to risk perception (i.e.
the comprehension and anticipation of risk) from problems in the process of
forming SA (i.e. the perception of information relating to risk).
Although a rather crude distinction, it facilitates the delineation of SA-related
activity leading to the mishap.

SA (risk perception)
Judging risk is a key aspect of SA as it reflects how operators interpret information,
think ahead, and make decisions. The DH investigation highlights SA problems in
the real-time management of risk. Specifically, the investigation critiques the lack of
risk assessment on the DH, with the platform team not accurately assessing risks.
Risk research shows that problems in assessing risk are central to poor
decision-making prior to organisational mishaps (Wagenaar 1992) and that numerous
factors influence how we consider and respond to risk. For example, subjectivity (how a risk is
framed), control (whether we have control over a risk), and expertise for
understanding risk are important (Finucane et al. 2000; Mack 2003; Slovic 1979). Formalised
methods of risk assessment are used to reduce biases in risk perception (Pidgeon
1991), and offshore research shows that risk perception is influenced by factors such
as safety climate, previous experiences of harm, work conditions, and management
commitment to safety (Mearns, Whitaker, and Flin 2003; Rundmo 1996).
Decision-making on the DH was often influenced by misperceptions of risk. For
example, the salience of causing long-term damage to the viability of the well (i.e.
production potential) outweighed consideration of short-term risks. The selection of
a ‘long-string’ production casing (used to secure the borehole and lower the drill
string through) by BP engineers to cement the well (National Oil Spill Commission
2011, 115, 123) is found to have been made to reduce the likelihood of future lost
returns. This was despite initial computer modelling showing a long-string casing
would increase the difficulty of the immediate cement job. Similarly, the decision
during the temporary abandonment phase to pump cement into the well at a low
rate (National Oil Spill Commission 2011, 116) was made to reduce lost returns.
Yet it created short-term risk through reducing the efficiency of mud displacement
from the well. Also, the use of foam cement slurry was taken to preserve the
long-term viability of the well (as it was light and less likely to cause damage), yet
it increased risk through creating greater reliance on external expertise.
Risk assessment research shows the importance of structured risk analysis in
uncertain decision environments (Faber and Stewart 2003), and the National Oil
Spill Commission (2011) report highlights the informal nature of risk appraisal and
decision-making. An example is the decision by the BP engineering team, on
learning that only six out of 16 centralisers were available, to not perform a formal
risk assessment of using six centralisers. This decision was made despite it being
contrary to well design specifications, initial computer simulations, and staff
concerns. Similarly, the decision to set the cement plugs in seawater (due to fears
over mud entering the cement plug) rather than heavy drilling mud is critiqued, as
there was no precedent for performing this operation at such depth. Although the
decision was made to minimise risk, little formal analysis of potential new risks
was made (e.g. through the stress placed on the cement formation). Poor risk
assessment, judged central to the mishap, is seen as emerging from a drive to save
time and resources, and also from poor regulation. Yet, as discussed below, risk
assessments were also shaped by a range of other factors.

SA (process)
The investigation into the DH describes a number of instances where inaccurate SA
of the technical process of establishing the well contributed to the mishap. For
example, due to a managerial decision not to run a cement evaluation log (tests
used to assess the integrity of the cement job), assessments on the success of the
cement job were based on a limited set of information (National Oil Spill Commis-
sion 2011, 117). Evaluation of the cement job was based on an awareness of
whether fluid was flowing back up the well, and indicators of problems in the well
(e.g. during the negative pressure test (NPT)) were not acknowledged. The report
describes crew members as exhibiting a form of confirmation bias (National Oil Spill
Commission 2011, 118–119). In particular, when the negative-pressure test team
were unable to retain a drill-pipe pressure of zero (indicating a successful negative-
pressure test), they instead based their SA on information (pressure on the kill
line) that appeared to show the negative-pressure test to be successful, ignoring
contradictory data.
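The interpretation logic at issue can be expressed as a simple cross-check: a negative-pressure test should only be declared successful when every monitored channel agrees. The sketch below is purely illustrative; the function name, channel labels, and tolerance parameter are our own invention, not from the Commission report or any drilling system:

```python
def negative_pressure_test_ok(drill_pipe_psi: float, kill_line_psi: float,
                              tolerance_psi: float = 0.0) -> bool:
    """Illustrative cross-check: declare the test successful only if BOTH
    monitored channels hold at (near) zero pressure. Relying on a single
    reassuring channel can mask a failed barrier."""
    channels = {"drill pipe": drill_pipe_psi, "kill line": kill_line_psi}
    # Any channel still holding pressure is contradictory data: the test
    # fails and should be investigated, not rationalised away.
    anomalies = {name: p for name, p in channels.items() if p > tolerance_psi}
    return not anomalies
```

Under a rule of this form, a nonzero drill-pipe pressure fails the test regardless of a zero kill-line reading, which is precisely the contradictory evidence the crew discounted.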
Loss of SA is also specified as contributing to the blowout. Crew members on
the drill floor were not aware of dynamic and substantial changes in the status of
the Macondo well. ‘Kicks’ that signalled a blowout was imminent (National Oil
Spill Commission 2011, 120–121) were not detected, and over a 45-min period
various data (e.g. increases in drilling-pipe pressure, pressure differences between
the drill pipe and kill line) indicating an impending blowout were not acted upon.
The reasons for this appear to be the demands placed upon crew members on the
drill floor, whereby they were required to perform multiple tasks at once, resulting
in their attention being split. This was compounded by instrumentation and displays used for monitoring the well not alerting the drilling crew that a kick had occurred, meaning detection of the blowout relied solely on drilling crew awareness.

Cognitive/individual factors summary


The DH investigation found problems in risk assessment or SA as underlying
flawed decision-making. This is consistent with NTS research (Flin, O’Connor, and
Crichton 2008), and offshore research on risk assessment (Okstad, Jersin, and
Tinmannsvik 2012), SA (e.g. failures of information monitoring and recognition)
(Sneddon, Mearns, and Flin 2006), and information system design (Woodcock and Toy 2011). Interventions to address decision-making, risk awareness, and SA problems offshore could focus on improving the decision-making skills of operators
(e.g. through systemising thinking, or refining communication skills), and informa-
tion collection and presentation techniques. They could also focus on improving
formal risk assessment procedures, and training for staff to recognise problematic
patterns and trends indicating uncertainty (Skogdalen, Utne, and Vinnem 2011).
However, considering the highly social nature of work offshore, it is unlikely that solely individual-focussed interventions will succeed, and they must reflect social influences on activity. Specifically, team SA research shows the importance of considering social dynamics in group assessments of risk (Reader et al. 2011).

Social/team factors
NTS research has long shown the impact of teamwork upon decision-making and
risk assessment in high-risk domains (Stout, Salas, and Fowlkes 1997). These
activities are crucial for maintaining safety offshore (O’Connor and Flin 2003) and are considered in the DH investigation. Understanding and improving teamwork in offshore settings is challenging, as decision-making occurs between various parties
(e.g. engineers, technicians, managers) who are not together, work across shifts,
belong to different organisations, and have different expertise and seniority.

Teamwork
The National Oil Spill Commission (2011) reports various instances of poor communication and leadership between crew members on the rig (and with operating companies): for example, communication between the rig and the onshore support teams. Formal discussions were frequently not held for key operational decisions, and communication on well management was poor. The rationale underlying decision-making was not shared or clearly documented, and risks were
not formally assessed by crew members (National Oil Spill Commission 2011,
120). Changes in decision-making were frequently made, yet BP well site leaders
and rig crew had minimal time to become familiar with these changes due to poor
communication. Critically for the blowout, communication between operational
decision-makers on the drilling rig was also problematic (National Oil Spill
Commission, 2011, 123). Frequently, operational staff were unaware of emerging
risks and difficulties, for example those surrounding the negative-pressure test. This shaped their
understanding of the work environment, and meant decisions occurred with a
‘bounded awareness’ of the relevant information relating to potential risk (Simon
1991), leaving operators vulnerable to a mishap.
Furthermore, poor inter-organisational communication between Halliburton and
BP is also cited as a key factor in the mishap (National Oil Spill Commission 2011,
117, 123–124). In February 2010, Halliburton specialists recognised the potential
for the cement slurry to be unstable. Yet no investigation was made into the slurry
design, and BP was not informed. A further test in April (prior to the scheduled
cement job) repeated this result, and Halliburton failed to communicate to BP the
possibility that the foam cement slurry was potentially unstable. Thus, decisions on
the cement job were made on an incorrect assumption of cement slurry design
stability. This again speaks to the shared nature of risk assessment, and a lack of
clear procedures outlining communication between the two organisations. More pointedly, it highlights the importance of considering system interdependencies (within and between organisations) when developing interventions for improving decision-making, as these were critical to the mishap.

Leadership
The National Oil Spill Commission (2011) also identifies problems in leadership for
technical and risk-related decision-making (e.g. communicating on the centralisers),
and overall management of the DH (National Oil Spill Commission 2011, 122).
Hopkins (2011) elaborates and considers safety leadership and communication on
the DH with respect to the leadership of four VIPs (two from BP and two from
Transocean, all experts on offshore drilling) visiting the DH rig on the day of the
mishap. The purpose of the visit was in part to reinforce safety. Despite signs that the well had not been sealed, and the potential for a well blowout, these indicators were not investigated or queried. In particular, Hopkins (2011) highlights the nature of the
flawed communication between the VIPs and the DH team as contributing to their failure to identify the problems that were occurring. First, the VIPs tended to focus on ‘slips, trips, and falls’ rather than on other aspects of safety and risk assessment. Second, the walk-around was conducted in as unobtrusive a manner as possible in order not to undermine the crew. Third, safety questions were framed in a manner that
assumed things had gone successfully, limiting opportunities to discuss problems.
Furthermore, in considering the management of the accident, leadership again
appears important. For example, problems in command and control during the
evacuation sequence were critical (Skogdalen, Khorsandi, and Vinnem 2012).
The DH had a split chain of command between the Offshore Installation Manager
(OIM) and the vessel captain. Leadership depended on the status of the rig, whether it
was latched, underway, or in an emergency situation. Critical decisions relating to lifeboat launches had to be made, and although the responsibility for decision-
making lay with the Captain, crew members were unclear as to who was in charge
due to missing handover procedures. To some extent, the report finds leadership issues
(e.g. setting clear expectations, developing clear communication procedures, listening
to junior staff) as underlying the problems in communication, SA, and risk assessment
leading to the incident.

Team factors summary


The finding that teamwork and leadership problems contributed to the DH mishap
is highly consistent with offshore safety research. The Cullen (1990) report high-
lights similar issues leading to the Piper Alpha explosion in the North Sea in 1988,
with effective communication and leadership within teams and across shifts (and
companies) recognised as essential for preventing mishaps. Teamwork was influ-
enced by professional and social barriers (e.g. between operational and contract staff, and between management and technical staff) that created divergent perceptions of risk and unclear lines of
responsibility (Mearns, Flin, and O’Connor 2001). To address these problems, Crew Resource Management-style training has been used to train control-room operator decision-
making competences during emergencies (Flin 1995), teamwork skills in offshore
production teams (O’Connor and Flin 2003), and group decision-making in deepwa-
ter exploration teams (Crichton 2009). Yet, these focus on quite specific problems
and situations, and mishap research shows that problems in team and leadership
communication often reflect aspects of safety culture.

Safety culture problems identified in the Deepwater Horizon mishap investigation
The National Oil Spill Commission (2011) report considers how behaviours on the
DH were shaped by the organisations involved and the industry environment. Safety
culture refers to how an organisation manages safety, and workforce beliefs
and activities relating to safety (Guldenmund 2000). Safety culture is argued to
influence organisational safety through creating socio-technical constraints and enablers (e.g. the ability to report safety incidents) that influence employee beliefs and
behaviour in relation to safety. Numerous factors influence safety culture, including
explicit and tacit communication on risk, incident reporting systems, learning from
incidents, apportionment of blame, safety investment, emergency management pro-
cedures, and human factors training and awareness. Within the DH investigation,
various manifestations of poor safety culture are seen to underlie the mishap.

Production vs. safety pressure

The DH investigation identifies production vs. safety pressures as underlying decision-making and operational behaviour. This is the classic indicator of safety culture (Flin et al. 2000), and although the Macondo well was not active, the deep-sea
drilling operation was highly expensive and pressure to ensure progression existed.
Many of the riskier operational decisions were made due to a desire to save time or costs, or to ensure the long-term viability of the well, and “without full appreciation of the
associated risks” (National Oil Spill Commission 2011, 223). It was believed by crew
members that operations could only be stopped if there was deemed to be an immediate threat to their own personal safety, rather than a threat to the integrity of the
drilling operation itself (Hopkins 2011). Furthermore, a survey of the Transocean
crew prior to the incident found some employees to fear reprisals for reporting unsafe
situations, and others felt staff shortages were limiting work completion. This reflects
safety culture research in the offshore oil and gas industry, with the risk assessment
and safety behaviours of offshore workers being shaped by beliefs regarding
organisational prioritisation of safety, training, knowledge of safety, the regulatory
environment, and organisational culture (Mearns, Whitaker, and Flin 2001). Along-
side the typical production-safety pressures found in safety culture investigations, the
report also identifies a number of other manifestations of poor safety culture.

Organisational learning from previous mishaps and near-misses


Learning from incidents is key to safety culture (Mearns et al. 2013), and several
directly relevant incidents from within BP and Transocean were not learned from or
incorporated into best practice guidelines for operator staff (National Oil Spill
Commission 2011, 219). Examples include a gas line rupture on a North Sea BP platform, and the focus on lost-time incidents (rather than process safety) prior to the Texas City refinery explosion. Crucially, a similar failure on a North Sea
Transocean rig (involving condensate release after a ‘successful’ NPT) was not
taken into account on the DH (National Oil Spill Commission, 2011, 124).
Regulators play a key role in aiding companies to learn from cross-industry failures
(e.g. through inspection), and the Minerals Management Service (MMS) is critiqued
for its lack of effectiveness. Because the relevant organisations did not share or emphasise information on previous near-misses, the likelihood of a future blowout was increased; this was indicative of a less than optimal safety culture.

Regulation
As discussed above, the role of the regulator is critiqued by the DH investigation.
Effective regulation shapes offshore safety culture through creating expectations and
norms on safety management (Cox and Cheyne 2000; Taylor 1979). The MMS
lacked the staff, resources, technical expertise (e.g. regarding the increased likelihood of blowout preventer failures in deepwater conditions (National
Oil Spill Commission 2011, 74)), decision-making autonomy, and political influence
to regulate safely. Senior officials focused on maximising ‘revenue from leasing and
production’. This impacted upon safety culture through the following mechanisms.
First, external inspections were often less rigorous than internal safety audits, focussing on quantity rather than quality (National Oil Spill Commission
2011, 78). Inspectors did not ask ‘tough questions’ and avoided reaching conclusions
that would increase regulation or costs (National Oil Spill Commission 2011, 126).
Second, contingency planning for DH disaster scenarios was inadequate (National
Oil Spill Commission 2011, 84), and there were ‘no meaningful’ regulations for testing cement, managing well-cementing, or conducting negative-pressure tests (National Oil Spill Commission 2011, 228). Where guidelines were available (e.g.
depths for installing cement plugs), exclusions were accepted.
The above issues in regulation are seen as contributing to an environment where
production was prioritised over safety; an underlying cause of the DH mishap. The
lack of ‘safety cases’ is seen as emblematic of the poor industry-wide safety culture
within which the DH operated. Safety cases involve operating companies validating
the effectiveness of their installation safety management systems through demon-
strating that hazards have been mitigated to ‘as low as reasonably practicable’.
Although safety cases were previously rejected for the United States, there have been calls to introduce them there in a manner similar to the North Sea regime (National Oil Spill Commission 2011). Ideally, this would result in organisations outlining their safety management systems and procedures in greater detail, alongside changing perceptions of the prioritisation of safety and production. However, it has been suggested that safety cases are not compatible with US law, and that regulators lack the resources necessary to make a safety case regime minimally successful (Steinzor 2011).

Human factors engineering


The DH investigation highlights the problems in human factors engineering
underlying the incident. The extent to which organisations consider and engineer
systems to cope with human factors problems is highly symptomatic of safety
culture. Human factors management in a number of areas on the DH was seen as
poor. First, in terms of training, staff were inadequately trained to manage and respond to emergency and safety-related situations. Second, in terms
of human performance limitations, operators performed safety-critical tasks requir-
ing vigilance for long shifts, despite the effects of fatigue and unfavourable compar-
isons with safety regulations in other national environments (National Oil Spill
Commission 2011, 225). Safety manuals were overly complex and unreflective of
the working environment, and did not provide adequate guidance for safety-critical
tasks (e.g. the negative-pressure test). Third, systems engineering was not optimal in a
number of areas, in particular the design of human-computer interfaces for monitor-
ing equipment, and automated emergency response systems.

Organisational factors summary


Since the Piper Alpha disaster, when operating companies had little incentive to
reduce risks unless they were explicitly identified by the regulator (Paté-Cornell
1993), safety culture has consistently been identified as underlying offshore safety mishaps.
The DH investigation highlights a range of instances where safety culture theory
can be used to account for the mishap. However, as we consider below, the relationships between safety culture and organisational accidents remain complicated, contested, and difficult to shape (DeJoy 2005).

The Deepwater Horizon explosion: a complex mishap framework


The National Oil Spill Commission (2011) investigation implicitly adopts a classical
root-cause analysis to understand the DH accident. This shows how a multitude of
‘latent’ factors (well design, safety culture) and ‘active’ failures (operator decisions) cascaded to
cause the incident. The report highlights numerous factors as having a role in the
Downloaded by [LSE Library] at 04:00 06 January 2014

incident, yet the relationship and interdependencies between these factors is not
fully explored. Furthermore, the behavioural aspects of the incident are not placed
in a psychological framework that facilitates interventions. Through applying NTS
and safety culture theory to interpret key events, we identify recurring behavioural
and organisational issues across the offshore system that contributed to the mishap.
These are similar to the psychological issues identified in previous investigations of
safety within the offshore industry, and point to specific (e.g. to reduce the likeli-
hood of future blowouts) and general (e.g. team training, tactical decision games)
interventions for improving non-technical skills and safety culture.
Yet, as indicated in the preceding discussion, whilst the analysis of the DH
mishap does highlight a variety of disparate activities and conditions leading to the
event, it does not connect them together. This is important, as a recurring critique
of the incident analysis literature is the extent to which such analyses result in interventions
that improve safety within dynamic and changing industries. This would certainly
appear to be the case in the offshore sector. Reliability analyses of offshore systems tend to be effective at assessing risks associated with engineering systems, but not social systems (Bea 2002). Rather than addressing ‘eureka’ moments, an accumulation of psychological and technical problems requires attention (Dekker, Cilliers, and Hofmeyr 2011). Interventions to reduce risk tend to focus on individual components of
an incident rather than the various accumulated disruptions in the organisational
system that created the possibility for mishap occurrence (Turner and Pidgeon 1997).
Alternatively, they focus on generic concepts such as safety culture, which aim to
shape the overall safety environment and thus reduce the likelihood of accidents.
The DH investigation highlights a range of areas for safety interventions to be
developed and applied. Research on NTS and safety culture has resulted in
numerous tools and techniques appropriate for offshore environments: for example,
systems for monitoring and sharing incident data, culture change tools, safety
diagnostics, behavioural observation and training, predictive risk models etc. (Flin,
O’Connor, and Crichton 2008; Kujath, Amyotte, and Khan 2010). However, to
optimise the effectiveness of such interventions, it is necessary to systematically
consider how the components underlying the DH incident interacted together to
escalate risk. Focussing on individual activities or environmental features identified
as causing the event may not be effective. This is because ameliorating one threat
to system safety offshore (e.g. changing procedures for the NPT) only addresses a
specific scenario or problem, and there are multiple pathways through which error
or problems in managing hazards can result in a mishap (Anderson 1999).
Whilst the specific failures leading to an incident might be addressed (e.g.
blowout preventer technology), the deeper problems (e.g. regulation, safety culture) that underlie the failures remain and can combine to create similar events
in different areas of the offshore system. Furthermore, behaviours that appear erroneous are often only understood as such with hindsight, despite decision-makers being unable
to assess (e.g. due to information limitations) the consequences of their judgements
in real-time (Cook and Nemeth 2010). In utilising the DH analysis to shape safety
management across the offshore industry, it is necessary to begin charting and
explaining the relationships between the various components within the offshore
system that contributed to or were involved in the mishap. Understanding activity in
the context of the organisational system is essential for developing a more
integrated accident analysis.

Using the NTS and safety culture analysis above, Figure 1 captures the interac-
tions between the various events identified in the DH investigation as leading to the
mishap. The model takes inspiration from Rasmussen’s (1997) description of the
inter-connected systems and complex interactions leading to the Zeebrugge ferry
accident. It utilises Leveson’s (2011) focus on the hierarchical relationships
between: (i) the specific events/errors leading to an accident; (ii) the environmental
conditions that allowed the mishap to occur; and (iii) the underlying system factors
shaping the organisation and work conducted within it. In Leveson’s (2012)
Systems-Theoretic Accident Model and Processes (STAMP) approach, safety is
viewed as a control problem. Traditionally, accident models explain accident
causation in terms of a series of events. However, the STAMP approach considers
accidents to occur as a result of a lack of constraints imposed on the system design
and during operational deployment. In this approach, accidents in complex systems
are not considered to occur due to independent component failures. Rather they
result from external disturbances or dysfunctional interactions amongst system
components that are not adequately handled by the control system (see Leveson
2012 for more detail).
In the model presented, time is not explicitly captured. Rather, systemic aspects
of the accident process such as regulation, safety culture, communication, use of
third parties and human factors engineering are conceptualised as factors that implicitly influenced the risk environment over time. Also, the
model is limited to the analysis of factors leading to the DH mishap described in
the official National Oil Spill Commission (2011) report, and undoubtedly, there are
factors yet to be included.
Figure 1 shows no single root cause or chain of behaviours to have caused the
accident. Although the failure of the blowout preventer is often seen as the core
failure underlying the oil spill (and remains the focus of technical and non-technical
research on safety interventions), a combination of interlinked events that occurred
simultaneously underlies the incident. These led to the non-technical skills and safety culture problems described earlier. Many of the accident mechanisms (e.g. the
unsuccessful cement job) leading to the blowout emerged from separate and distinct
failures (e.g. flaws in the cement design, inappropriate foam cement slurry). It is
not clear whether ameliorating these individually would have prevented the accident
mechanisms from occurring, or whether the deeper-lying conditions (flaws in the
design of the well, poor information sharing between contract and operating companies) would simply have created alternative accident mechanisms (or postponed the mishap to a later point in time).

[Figure 1 (diagram): the Deepwater Horizon destruction and oil spill is shown arising from four event/accident mechanisms (1. failure of the cement barrier; 2. hydrocarbons enter the well and travel up the riser; 3. hydrocarbons enter/ignite on the rig floor; 4. blowout preventer (BOP) failure), each traced back to conditions that allowed the mishap to occur (e.g. an unsuccessful cement job; flaws in well, cementing, and temporary abandonment procedure design; an inappropriate foam cement slurry; an incorrectly performed negative-pressure test (NPT), with NPT team errors/confirmation bias and a lack of clear NPT procedures; senior staff not questioning the cement job/NPT; the drilling crew failing to notice the ‘kick’, with attention split between tasks; instrumentation and display problems for monitoring the well; emergency procedures not implemented; faulty batteries; a lack of automated fire detection; inadequate maintenance; minimal communication on operational decisions; poor information sharing between operator and contract companies; informal risk assessment procedures; and not learning from previous incidents), and to underlying system factors: production/cost-saving pressure; third-party companies conducting safety-critical work; industry standards/regulation; the communication culture between operational, management, and contract staff; and human factors engineering.]

Figure 1. Interactions between events leading to the Deepwater Horizon mishap, the conditions that allowed the mishap to occur, and the system factors underlying the mishap (as described within the National Oil Spill Commission Report (2011)).
Furthermore, the conditions leading to the DH mishap can be seen as manifesta-
tions of systemic factors or “system migration to states of higher risk” (Leveson
2011, 60). For example, the lack of industry regulation for running safety critical
processes created risks across the offshore system, including negating the need for
formal risk assessments in the design of the well, training for managing the NPT
and emergency scenarios, and maintenance and inspection routines. It also
negatively influenced other system factors, such as safety culture and the
requirement for effective human factors engineering.
Other system factors, such as the culture of communication, shaped the work
environment through more implicit mechanisms (e.g. information sharing between
contract and operator staff, management listening to the concerns of junior staff)
and heightened the risks associated with safety critical tasks leading to the mishap.
For example, in terms of the decision-making and risk assessment of operators, the
analysis shows how ‘errors’ were shaped. In particular, the drilling crews monitor-
ing the progress of the well were unaware of the problems with the NPT, and their
attention was split across different tasks. This reflects a human factors engineering problem (a systemic issue), and also shows that their decision-making was constrained through a lack of communication (and thus awareness) of the doubts surrounding the NPT. In turn, those performing the NPT did not have findings from
the cement evaluation log available and were unaware of the problems surrounding
the cement slurry (known by Halliburton, but not communicated to BP) and thus
the need to take a more conservative approach to assessing well integrity.
In summary, at the level of event/accident mechanism, the DH mishap has a
relatively traditional accident causation pathway. Active (NTS) and latent factors
(safety culture) combined together to contribute to the incident, and it can be
explained from a traditional linear accident analysis perspective. However, closer
inspection of the incident using a systems-thinking perspective reveals a range of complicated interdependent systems, with the accident potentially occurring through a
multitude of pathways. As offshore systems become increasingly complex, and the
nature of work itself more challenging, it becomes necessary to better understand
the inter-linking components that underlie the incident, and may combine again in
future to cause different mishaps with similar causes.

Conclusions
A range of operational behaviours and underlying safety management problems are
identified as causing the DH disaster. We apply NTS and safety culture concepts to
interpret these factors, as they facilitate the design of interventions through
identifying common (yet highly specific) patterns of cognition, social behaviour
and organisational management underlying mishaps. Yet, this would appear
insufficient for applying the lessons from the DH to prevent future offshore
mishaps. Specifically, for interventions to be effective, a better understanding is
required of the interactions and relationships between the various events, conditions,
and system factors that combined to produce the incident. We begin the
development of a systems-model delineating the various pathways through which
underlying system factors created organisational risk and compromised the activity
of operational staff on the DH in the lead-up to the blowout. The principles of this
model have applications beyond the offshore sector, and it attempts to capture the
migration of risk across complex organisational systems.

References
Anderson, P. 1999. “Complexity Theory and Organization Science.” Organization Science
10: 216–232.
Bea, R. 2002. “Human and Organizational Factors in Reliability Assessment and
Management of Offshore Structures.” Risk Analysis 22: 29–45.
BP. 2010. Deepwater Horizon: Accident Investigation Report. Houston, TX: BP.
Cook, R., and C. Nemeth. 2010. “‘Those Found Responsible Have Been Sacked’: Some
Observations on the Usefulness of Error.” Cognition, Technology & Work 12: 87–93.
Cox, S., and A. Cheyne. 2000. “Assessing Safety Culture in Offshore Environments.” Safety
Science 34: 111–129.
Crichton, M. 2009. “Improving Team Effectiveness Using Tactical Decision Games.” Safety
Science 47: 330–336.
Cullen. 1990. Report of the Official Inquiry into the Piper Alpha Disaster. London: HMSO.
DeJoy, D. 2005. “Behavior Change versus Culture Change: Divergent Approaches to
Managing Workplace Safety.” Safety Science 43: 105–129.
Dekker, S., P. Cilliers, and J. Hofmeyr. 2011. “The Complexity of Failure: Implications of
Complexity Theory for Safety Investigations.” Safety Science 49: 939–945.
Endsley, M. 1995. “Towards a Theory of Situation Awareness in Dynamic Systems.” Human
Factors 37: 32–64.
Faber, M., and M. Stewart. 2003. “Risk Assessment for Civil Engineering Facilities: Critical
Overview and Discussion.” Reliability Engineering & System Safety 80: 173–184.
Finucane, M., A. Alhakami, P. Slovic, and S. Johnson. 2000. “The Affect Heuristic in
Judgements of Risks and Benefits.” Journal of Behavioural Decision Making 13: 1–17.
Flin, R. 1995. “Crew Resource Management for Training Teams in the Offshore Oil
Industry.” European Journal of Industrial Training 9: 23–27.
Flin, R., K. Mearns, P. O’Connor, and R. Bryden. 2000. “Safety Climate: Identifying the
Common Features.” Safety Science 34: 177–192.
Flin, R., P. O’Connor, and M. Crichton. 2008. Safety at the Sharp End: A Guide to
Non-technical Skills. Aldershot: Ashgate.
Gordon, R., R. Flin, and K. Mearns. 2005. “Designing and Evaluating a Human Factors
Investigation Tool (HFIT) for Accident Analysis.” Safety Science 43: 147–171.
Guldenmund, F. 2000. “The Nature of Safety Culture: A Review of Theory and Research.”
Safety Science 34: 215–257.
Hopkins, A. 2011. “Management Walk-arounds: Lessons from the Gulf of Mexico Oil Well
Blowout.” Safety Science 49: 1421–1425.
Hopkins, A. 2012. Disastrous Decisions: The Human and Organisational Causes of the Gulf
of Mexico Blowout. Sydney: CCH Australia.
Katsakiori, P., G. Sakellaropoulos, and E. Manatakis. 2009. “Towards an Evaluation of
Accident Investigation Methods in Terms of Their Alignment with Accident Causation
Models.” Safety Science 47: 1007–1015.
Kirwan, B. 2001. “Coping with Accelerating Socio-technical Systems.” Safety Science 37:
77–107.
Kujath, M., P. Amyotte, and F. Khan. 2010. “A Conceptual Offshore Oil and Gas Process
Accident Model.” Journal of Loss Prevention in the Process Industries 23: 323–330.
Lawton, R., and D. Parker. 1998. “Individual Differences in Accident Liability: A Review
and Integrative Approach.” Human Factors 40: 655–671.
Leveson, N. 2011. “Applying Systems Thinking to Analyze and Learn from Events.” Safety
Science 49: 55–64.
Leveson, N. 2012. Engineering a Safer World: Systems Thinking Applied to Safety. Cambridge,
MA: MIT Press.
Mack, A. 2003. “Inattentional Blindness: Looking Without Seeing.” Current Directions in
Psychological Science 12: 179–184.
Mearns, K., R. Flin, and P. O’Connor. 2001. “Sharing ‘Worlds of Risk’: Improving Commu-
nication with Crew Resource Management.” Journal of Risk Research 4: 377–392.
Mearns, K., B. Kirwan, T. W. Reader, J. Jackson, R. Kennedy, and R. Gordon. 2013.
“Development of a Methodology for Understanding and Enhancing Safety Culture in Air
Traffic Management.” Safety Science 53: 123–133.
Mearns, K., S. Whitaker, and R. Flin. 2001. “Benchmarking Safety Climate in Hazardous Envi-
ronments: A Longitudinal, Inter-organisational Approach.” Risk Analysis 21: 771–786.
Mearns, K., S. Whitaker, and R. Flin. 2003. “Safety Climate, Safety Management Practice
and Safety Performance in Offshore Environments.” Safety Science 41: 641–680.
National Oil Spill Commission. 2011. Deep Water: The Gulf Oil Disaster and the Future of
Offshore Drilling. Washington, DC: National Commission on the BP Deepwater Horizon
Oil Spill and Offshore Drilling.
O’Connor, P., and R. Flin. 2003. “Crew Resource Management Training for Offshore Oil
Production Teams.” Safety Science 33: 111–129.
O’Hare, D. 2000. “The ‘Wheel of Misfortune’: A Taxonomic Approach to Human Factors in
Accident Investigation and Analysis in Aviation and Other Complex Systems.”
Ergonomics 43: 2001–2019.
Okstad, E., E. Jersin, and R. K. Tinmannsvik. 2012. “Accident Investigation in the
Norwegian Petroleum Industry – Common Features and Future Challenges.” Safety
Science 50: 1408–1414.
Paté-Cornell, M. E. 1993. “Learning from the Piper Alpha Accident: A Postmortem Analysis
of Technical and Organizational Factors.” Risk Analysis 13: 215–232.
Perrow, C. 1999. Normal Accidents: Living with High-risk Technologies. Princeton, NJ:
Princeton University Press.
Pidgeon, N. 1991. “Safety Culture and Risk Management in Organizations.” Journal of
Cross-Cultural Psychology 22: 129–140.
Rasmussen, J. 1997. “Risk Management in a Dynamic Society: A Modelling Problem.”
Safety Science 27: 183–213.
Reader, T., R. Flin, K. Mearns, and B. Cuthbertson. 2011. “Team Situation Awareness and
the Anticipation of Patient Progress during ICU Rounds.” BMJ Quality & Safety 20:
1035–1042.
Reason, J. 1997. Managing the Risks of Organisational Accidents. Aldershot: Ashgate.
Rundmo, T. 1996. “Associations Between Risk Perception and Safety.” Safety Science 24:
197–209.
Simon, H. A. 1991. “Bounded Rationality and Organizational Learning.” Organization
Science 2: 125–134.
Skogdalen, J., J. Khorsandi, and J. Vinnem. 2012. “Evacuation, Escape, and Rescue
Experiences from Offshore Accidents including the Deepwater Horizon.” Journal of Loss
Prevention in the Process Industries 25: 148–158.
Skogdalen, J., I. Utne, and J. Vinnem. 2011. “Developing Safety Indicators for Preventing
Offshore Oil and Gas Deepwater Drilling Blowouts.” Safety Science 49: 1187–1199.
Slovic, P. 1979. “Rating the Risks.” Environment 21: 36–39.
Sneddon, A., K. Mearns, and R. Flin. 2006. “Safety and Situation Awareness in Offshore
Crews.” Cognition, Technology & Work 8: 255–267.
Steinzor, R. 2011. “Lessons from the North Sea: Should ‘Safety Cases’ Come to America?”
Boston College Environmental Affairs Law Review 38: 417–444.
Stout, R., E. Salas, and J. Fowlkes. 1997. “Enhancing Teamwork in Complex Environments
through Team Training.” Group Dynamics: Theory, Research and Practice 1: 169–182.
Taylor, S. 1979. “Hospital Patient Behavior: Reactance, Helplessness, or Control?” Journal
of Social Issues 35: 156–184.
Turner, B., and N. Pidgeon. 1997. Man-made Disasters. 2nd ed. London: Butterworth-
Heinemann.
Wagenaar, W. 1992. “Risk Taking and Accident Causation.” In Risk Taking Behaviour, edited
by J. Yates. Chichester: Wiley.
Woodcock, B., and K. Toy. 2011. “Improving Situational Awareness through the Design of
Offshore Installations.” Journal of Loss Prevention in the Process Industries 24: 847–851.