To cite this article: Tom W. Reader & Paul O’Connor (2014) The Deepwater Horizon explosion: non-
technical skills, safety culture, and system complexity, Journal of Risk Research, 17:3, 405-424,
DOI: 10.1080/13669877.2013.815652
Journal of Risk Research, 2014
Vol. 17, No. 3, 405–424, http://dx.doi.org/10.1080/13669877.2013.815652
Introduction
Despite extensive research investigating safety within the oil and gas sector, serious
mishaps still occur. The 2010 explosion and sinking of the Deepwater Horizon
(DH) semi-submersible drilling rig in the Gulf of Mexico resulted in 11 fatalities,
and the world’s worst offshore oil spill. As the exploration and production of hydrocarbons move to deeper, more remote, and more ecologically sensitive settings, learning
from incidents becomes even more essential. To date, analyses of the DH mishap
have tended to focus on technical decision-making (BP 2010), the detection of
blowouts (Skogdalen, Utne, and Vinnem 2011), and safety regulation (Steinzor
2011). The investigation by the National Oil Spill Commission (2011) highlights
behaviours and organisational factors underlying the incident. However, these are
yet to be considered systematically within a human factors framework; this is
necessary for developing appropriate interventions and refining theory on accident
causation. In this paper, we consider the DH mishap from a non-technical skills
(NTS) and safety culture perspective. Then, through applying a systems-thinking
perspective, we reflect on the role of the human factors-related problems underlying
the incident.
Mishap data often provide the catalyst for safety interventions; however, their
actual contribution to safety improvement can be limited. For example, data from
accident investigations often identify multiple contradictory contributory factors
linked to a mishap (Katsakiori, Sakellaropoulos, and Manatakis 2009), they inter-
pret behaviours out of context (Rasmussen 1997), lessons can quickly become
obsolete (Kirwan 2001), obvious problems are easier to identify than latent threats
(O’Hare 2000), and the influencers of behaviour are often poorly described (Law-
ton and Parker 1998). Analyses frequently identify causes that are not observable
in real-time, and ‘risky’ behaviours in one context can prevent mishaps in another
(Perrow 1999). A recurring critique of accident analysis techniques is their focus
on the individual components of an incident (i.e. the actions preceding a mishap),
rather than their interdependencies or systemic causes (Leveson 2011). System-
thinking approaches attempt to explain the accident process, and to understand
how the beliefs and actions of an operator are dependent on factors outside their
control or awareness. To be effective, interventions to reduce future events must
consider these interdependencies (Rasmussen 1997).
The National Oil Spill Commission’s (2011) investigation of the DH provides
rich data on why the mishap occurred. Specific instances of behaviour and
organisational management are described. Yet it does not frame these using relevant
psychology literatures, or fully consider system interdependencies. This is necessary
for (i) describing the underlying psychological dimensions (rather than specific
instances) of worker behaviour and organisational environment that led to the
incident, (ii) explaining how risky activity occurred, and (iii) developing interventions to reduce the likelihood of error or future mishaps. DH is a seminal event for
safety management in the offshore industry, and we apply psychology theory to
understand the behaviour and organisational environment underlying the incident.
We consider the incident using two of the leading approaches used to understand
safety-related activity in high-risk workplaces, and then adopt a systems thinking
perspective to integrate our findings.
The first, NTS, refers to the cognitive (decision-making, risk assessment,
situation awareness (SA)) and social (teamwork, leadership) skills that underpin safe
performance in high-risk workplaces (Flin, O’Connor, and Crichton 2008).
Psychology research demonstrates the importance of worker NTS for managing
safety emergencies offshore, although offshore research in the area remains quite
limited (O’Connor and Flin 2003). NTS research analyses the non-technical skill
requirements of a workplace through investigating accident data, and triangulates
this analysis through other data collection techniques such as observations, surveys
and interviews (Flin, O’Connor, and Crichton 2008). Based upon this analysis,
training, assessment, and other safety intervention programmes are tailored to the
specific NTS needs of a work domain.
The analysis used the final report of the National Oil Spill Commission (2011) as the primary
source of information. The review was carried out by two industrial/organisational
psychologists with experience carrying out safety research across a range of high-
risk industries. First, the events that led to the mishap were identified from the
National Oil Spill Commission’s (2011) report. The researchers primarily focussed
on decision-making events highlighted within the report as contributing to the inci-
dent (see Table 2) and then explored the behaviours and context surrounding these
events. Through this analysis, the key stages that led to the mishap were identified,
and the interactions between the events were delineated. Second, the events (and
the interactions between them) were then considered and interpreted within the
context of the nontechnical skills and safety culture literatures (with specific
reference, where possible, to offshore safety research or other academic articles on
the DH).
Annular space: The space between the well casing and the surrounding rock formation.
Blowout preventer: A large valve used to control and sever the link between the oil well and the platform.
Cement plug: A plug of cement used to seal the well below the seabed.
Cement slurry (with nitrogen): Cement with tiny bubbles of nitrogen injected into it in order to reduce the weight placed upon the well. Unstable nitrogen foam slurry can become porous, with nitrogen breaking out of the cement.
Cementing process: Cement is pumped down the well and up the annular space so that it seals the production casing to the surrounding rock formation.
Channelling: Leaving gaps in the annular space (during the placement of the production casing) through which hydrocarbons can flow up.
Centralisers: Components placed between sections of the production casing to ensure optimal positioning of the casing in the well, and even displacement of drilling mud from the wellbore.
Drilling mud: Fluid used to aid the drilling of boreholes into the earth (e.g. keeping the drill bit cool, transporting rock cuttings to the surface).
Drill string: Hollow sections of drill pipe consisting of drill collars, tools, and bits that are used to drill oil wells.
Float valves: Valves on the production casing that ensure cement and mud can be pumped into the well without reversing direction and flowing back up the well.
Hydrocarbon zone: The reservoir of hydrocarbons being extracted.
(Well) kick: A flow of hydrocarbons and/or water into the wellbore and up the annular space (signalling that a blowout is going to occur).
Lockdown sleeve: A device used to lock the wellhead and production casing in place.
Lost return/circulation: Occurs when drilling fluid leaks into the surrounding geological formations (instead of returning up the annular space), potentially increasing the fragility of the rock being drilled.
Mud line: The ocean floor where the wellhead is placed.
Negative pressure test: Used to assess whether, under conditions of low pressure, hydrocarbons leak into the well.
Production casing: A continuous wall of steel placed between the hydrocarbon zone at the bottom of the well and the wellhead on the seafloor.
Riser: The link cable between the well and the drilling rig.
Shoe track barriers: Barriers for blocking hydrocarbons from entering the bottom of the pipe.
Spacer: Chemicals used to separate drilling mud from cement slurry.
Temporary abandonment: The well is closed off to allow the drilling rig to be moved on, and a smaller production unit to begin hydrocarbon extraction.
Table 1. Key stages leading to the explosion on the Deepwater Horizon (National Oil Spill Commission 2011).

Key stage 1: The cement barrier used to isolate the hydrocarbon zone at the bottom of the well from the annular space failed (report pp. 101–107). Causes:
- Errors in conducting and interpreting the negative-pressure test, creating the belief that the cement job had been successful
- Errors in the design of the cementing process
- The use of an inappropriate foam cement slurry to seal the well
- Design of the temporary abandonment, which resulted in overly high levels of pressure being placed on the cement job
Cognitive/individual factors
Research to understand mishaps in complex organisational settings often focuses on
the ‘cognitions’ (e.g. decision-making, risk assessment, SA) of the operators most
closely associated with a mishap. The DH investigation highlights a variety of
situations where problems in decision-making, risk assessment, and SA contributed
to the mishap.
Decision-making
The National Oil Spill Commission (2011) report highlights a series of operational
and managerial decisions which (either directly or indirectly) contributed to the
mishap. Many of the decisions identified refer to the design and management of the
Macondo well, for example the depth of the well or the cement casing that was to
be used (Hopkins 2012). Decisions involved participants distributed across locations
and various companies, and Table 2 lists key decisions identified as problematic.
Table 2. Key organisational decisions in the immediate lead-up to the Deepwater Horizon mishap (National Oil Spill Commission 2011). All dates are 2010; report page numbers are given in parentheses.

- February–April. Halliburton cement design and BP team (pp. 101–102). Decision: omission to investigate or consider negative results on the stability of the foam cement slurry. Influencing factors: saved time and costs; however, the possibility that the nitrogen foam slurry would be porous was not investigated, and the BP team were not aware of the potential problems with the cement slurry.
- 9th April. BP engineering team (pp. 93–94). Decision: cessation of drilling activities at 18,360 feet. Influencing factors: at 18,193 feet cracks started to appear in the well formation; to avoid further damage, fractures were plugged, and drilling to the intended depth of 20,200 feet was stopped.
- 11th–15th April. BP engineering team (p. 94). Decision: placing and cementing of a final production casing string to recover … Influencing factors: the exploratory Macondo well had drilled into an accessible hydrocarbon reservoir of at least 50 million barrels, although the rock formations at the bottom of …
- 19th April. BP engineering team (pp. 98, 102, 116). Decision: omission to investigate why four times the normal amount of pressure was required to convert the float valves (for unidirectional pumping of cement and mud), or why circulating pressure after conversion was lower than expected. Influencing factors: the BP and Transocean teams concluded that the pressure gauge being relied upon was broken, and further investigation of the anomalous readings would have delayed the cement job; however, it was not categorically known whether the float valves had converted.
- 19th April. BP engineering team (p. 100). Decision: limiting the circulation of drilling mud to 350 barrels rather than 2760 barrels. Influencing factors: circulation of 350 barrels of mud reduced the likelihood of a lost-return event and damage to the well; however, the lower volume of mud meant technicians could not fully circulate and examine mud from the bottom of the well for hydrocarbons.
- 19th April. BP engineering team (p. 100). Decision: specifying a low flow rate for pumping cement into the well. Influencing factors: this limited pressure on the well formation, but also reduced the efficiency of mud displacement from the annular space.
- 19th April. BP engineering team (p. 100). Decision: limiting the volume of cement to be pumped down the well to extend 500 feet above the hydrocarbon zone. Influencing factors: this reduced pressure on the geologic formation in order to protect the viability of the well; however, it deviated from internal guidelines specifying that cement should be pumped to 1000 feet above the hydrocarbon zone, and increased the likelihood of error in cement placement.
- 19th April. BP engineering team (p. 100). Decision: use of a lighter nitrogen-based cement formula. Influencing factors: pressure on the geologic formation was reduced because the cement was lighter; however, BP had very limited expertise in using this formula in the Gulf of Mexico.
- 20th April. BP engineering team (p. 102). Decision: rejection of the full suite of tests for evaluating the cement job. Influencing factors: saved time and money; however, evaluation of the cement job was then reliant on a far more limited set of data.
- 20th April. BP engineering team, with MMS approval (pp. 103–104). Decision: setting the cement plug 3300 feet below the mud line. Influencing factors: reduced the likelihood of damage occurring to the lockdown sleeve, but placed greater pressure on the cement job at the bottom of the well.
- 20th April. BP engineering team (p. 104). Decision: replacing 3000 feet of mud in the well with lighter seawater for setting the cement plug. Influencing factors: use of seawater avoided mud contaminating the cement plug whilst it set; however, the lighter seawater placed greater pressure on the cement job at the bottom of the well.
- 20th April. BP engineering team (p. 104). Decision: not installing additional physical barriers to stop … Influencing factors: saved time and costs, but meant that the cement job at the bottom of the well was the only physical …
SA (risk perception)
Judging risk is a key aspect of SA as it reflects how operators interpret information,
think ahead, and make decisions. The DH investigation highlights SA problems in
the real-time management of risk. Specifically, the investigation critiques the lack of
risk assessment on the DH, with the platform team not accurately assessing risks.
Risk research shows that problems in assessing risk are central to poor decision-mak-
ing prior to organisational mishaps (Wagenaar 1992) and that numerous factors influ-
ence how we consider and respond to risk. For example, subjectivity (how a risk is
framed), control (whether we have control over a risk), and expertise for understand-
ing risk are important (Finucane et al. 2000; Mack 2003; Slovic 1979). Formalised
methods of risk assessment are used to reduce biases in risk perception (Pidgeon
1991), and offshore research shows that risk perception is influenced by factors such
as safety climate, previous experiences of harm, work conditions, and management
commitment to safety (Mearns, Whitaker, and Flin 2003; Rundmo 1996).
Decision-making on the DH was often influenced by misperceptions of risk. For
example, the salience of causing long-term damage to the viability of the well (i.e.
production potential) outweighed consideration of short-term risks. The selection of
a ‘long-string’ production casing (used to secure the borehole and lower the drill
string through) by BP engineers to cement the well (National Oil Spill Commission
2011, 115, 123) is found to have been made to reduce the likelihood of future lost
returns. This was despite initial computer modelling showing a long-string casing
would increase the difficulty of the immediate cement job. Similarly, the decision
during the temporary abandonment phase to pump cement into the well at a low
rate (National Oil Spill Commission 2011, 116) was made to reduce lost returns.
Yet it created short-term risk through reducing the efficiency of mud displacement
from the well. Also, the use of foam cement slurry was taken to preserve the
long-term viability of the well (as it was light and less likely to cause damage), yet
it increased risk through creating greater reliance on external expertise.
Risk assessment research shows the importance of structured risk analysis in
uncertain decision environments (Faber and Stewart 2003), and the National Oil
Spill Commission (2011) report highlights the informal nature of risk appraisal and
decision-making. An example is the decision by the BP engineering team, on
learning that only six out of 16 centralisers were available, not to perform a formal risk assessment of using six centralisers. This decision was made despite it being
contrary to well design specifications, initial computer simulations, and staff
concerns. Similarly, the decision to set the cement plugs in seawater (due to fears
over mud entering the cement plug) rather than heavy drilling mud is critiqued, as
there was no precedent for performing this operation at such depth. Although the
decision was made to minimise risk, little formal analysis of potential new risks
was made (e.g. through the stress placed on the cement formation). Poor risk assessment is seen as emerging from a drive to save time and resources, as well as from poor regulation, and is judged central to the mishap. Yet, as discussed below, risk
assessments were also shaped by a range of other factors.
SA (process)
The investigation into the DH describes a number of instances where inaccurate SA
of the technical process of establishing the well contributed to the mishap. For
example, due to a managerial decision not to run a cement evaluation log (tests
used to assess the integrity of the cement job), assessments on the success of the
cement job were based on a limited set of information (National Oil Spill Commis-
sion 2011, 117). Evaluation of the cement job was based on an awareness of
whether fluid was flowing back up the well, and indicators of problems in the well
(e.g. during the negative pressure test (NPT)) were not acknowledged. The report
describes crew members as entering a form of confirmation bias (National Oil Spill
Commission 2011, 118–119). In particular, when the negative-pressure test team
were unable to retain a drill-pipe pressure of zero (indicating a successful negative-pressure test), they instead based their SA on information (pressure on
the kill line) that showed the negative-pressure test to be successful, ignoring
contradictory data.
Loss of SA is also specified as contributing to the blowout. Crew members on
the drill floor were not aware of dynamic and substantial changes in the status of
the Macondo well. ‘Kicks’ that signalled a blowout was imminent (National Oil
Spill Commission 2011, 120–121) were not detected, and over a 45-min period
various data (e.g. increases in drilling-pipe pressure, pressure differences between
the drill pipe and kill line) indicating an impending blowout were not acted upon.
The reasons for this appear to be the demands placed upon crew members on the
drill floor, whereby they were required to perform multiple tasks at once, resulting
in their attention being split. This was compounded by the instrumentation and displays used for monitoring the well not alerting the drilling crew that a kick had occurred; detection of the blowout thus relied solely on the drilling crew's awareness.
Social/team factors
NTS research has long shown the impact of teamwork upon decision-making and
risk assessment in high-risk domains (Stout, Salas, and Fowlkes 1997). These
activities are crucial for maintaining safety offshore (O’Connor and Flin 2003).
Teamwork
The National Oil Spill Commission (2011) reports various instances of poor
communication and leadership between crew members on the rig (and with
operating companies). One example is the communication between the rig and the onshore support teams. Formal discussions were frequently not held for key operational decisions, and communication on well management was poor. The rationale underlying decision-making was not shared or clearly documented, and risks were
not formally assessed by crew members (National Oil Spill Commission 2011,
120). Changes in decision-making were frequently made, yet BP well site leaders
and rig crew had minimal time to become familiar with these changes due to poor
communication. Critically for the blowout, communication between operational
decision-makers on the drilling rig was also problematic (National Oil Spill
Commission, 2011, 123). Frequently, operational staff were unaware of emerging
risks and difficulties, for example during the negative-pressure test. This shaped their
understanding of the work environment, and meant decisions occurred with a
‘bounded awareness’ of the relevant information relating to potential risk (Simon
1991), leaving operators vulnerable to a mishap.
Furthermore, poor inter-organisational communication between Halliburton and
BP is also cited as a key factor in the mishap (National Oil Spill Commission 2011,
117, 123–124). In February 2010, Halliburton specialists recognised the potential
for the cement slurry to be unstable. Yet no investigation was made into the slurry
design, and BP was not informed. A further test in April (prior to the scheduled
cement job) repeated this result, and Halliburton failed to communicate to BP the
possibility that the foam cement slurry was potentially unstable. Thus, decisions on
the cement job were made on an incorrect assumption of cement slurry design
stability. This again speaks to the shared nature of risk assessment, and a lack of
clear procedures for outlining communication between the two organisations. More
pointedly, it highlights the importance of considering system inter-dependencies.
Leadership
The National Oil Spill Commission (2011) also identifies problems in leadership for
technical and risk-related decision-making (e.g. communicating on the centralisers),
and overall management of the DH (National Oil Spill Commission 2011, 122).
Hopkins (2011) elaborates on safety leadership and communication on
the DH with respect to the leadership of four VIPs (two from BP and two from
Transocean, all experts on offshore drilling) visiting the DH rig on the day of the
mishap. The purpose of the visit was in part to reinforce safety. However, signs that the well had not been sealed, and the potential for a well blowout, were not investigated or queried. In particular, Hopkins (2011) highlights the nature of the
flawed communication between the VIPs and the DH team as contributing to them
not identifying the problems that were occurring. First, the VIPs tended to focus on ‘slips, trips, and falls’ rather than on other aspects of safety and risk assessment. Second, the walkround was conducted in as unobtrusive a manner as possible in order not to undermine the crew. Third, safety questions were framed in a manner that assumed things had gone successfully, limiting opportunities to discuss problems.
Furthermore, in considering the management of the accident, leadership again
appears important. For example, problems in command and control during the
evacuation sequence were critical (Skogdalen, Khorsandi, and Vinnem 2012).
The DH had a split chain of command between the Offshore Installation Manager
(OIM) and the vessel captain. Leadership depended on the status of the rig, whether it
was latched, underway, or in an emergency situation. Critical decisions relating to lifeboat launches had to be made, and although the responsibility for decision-making lay with the Captain, crew members were unclear as to who was in charge
due to missing handover procedures. To some extent, the report finds leadership issues
(e.g. setting clear expectations, developing clear communication procedures, listening
to junior staff) as underlying the problems in communication, SA, and risk assessment
leading to the incident.
Regulation
As discussed above, the role of the regulator is critiqued by the DH investigation.
Effective regulation shapes offshore safety culture through creating expectations and
norms on safety management (Cox and Cheyne 2000; Taylor 1979). The MMS
lacked the staff, resources, technical expertise (e.g. concerning the increased likelihood of blowout preventer failures in deepwater conditions (National
Oil Spill Commission 2011, 74)), decision-making autonomy, and political influence
to regulate safely. Senior officials focused on maximising ‘revenue from leasing and
production’. This impacted upon safety culture through the following mechanisms.
First, external inspections were often less rigorous than internal safety audits, focussing on quantity rather than quality (National Oil Spill Commission
2011, 78). Inspectors did not ask ‘tough questions’ and avoided reaching conclusions
that would increase regulation or costs (National Oil Spill Commission 2011, 126).
Second, contingency planning for DH disaster scenarios was inadequate (National
Oil Spill Commission 2011, 84), and there were ‘no meaningful’ regulations for
incident, yet the relationships and interdependencies between these factors are not fully explored. Furthermore, the behavioural aspects of the incident are not placed
in a psychological framework that facilitates interventions. Through applying NTS
and safety culture theory to interpret key events, we identify recurring behavioural
and organisational issues across the offshore system that contributed to the mishap.
These are similar to the psychological issues identified in previous investigations of
safety within the offshore industry, and point to specific (e.g. to reduce the likeli-
hood of future blowouts) and general (e.g. team training, tactical decision games)
interventions for improving non-technical skills and safety culture.
Yet, as indicated in the preceding discussion, whilst the analysis of the DH
mishap does highlight a variety of disparate activities and conditions leading to the
event, it does not connect them together. This is important, as a recurring critique of the incident-analysis literature concerns the extent to which such analyses result in interventions that improve safety within dynamic and changing industries. This would certainly
appear to be the case in the offshore sector. Reliability analyses of offshore systems tend to be effective at assessing risks associated with engineering systems, but not social systems (Bea 2002). Rather than addressing ‘eureka’ moments, a culmination of
psychological and technical problems require attention (Dekker, Cilliers, and Hof-
meyr 2011). Interventions to reduce risk tend to focus on individual components of
an incident rather than the various accumulated disruptions in the organisational
system that created the possibility for mishap occurrence (Turner and Pidgeon 1997).
Alternatively, they focus on generic concepts such as safety culture, which aim to
shape the overall safety environment and thus reduce the likelihood of accidents.
The DH investigation highlights a range of areas for safety interventions to be
developed and applied. Research on NTS and safety culture has resulted in
numerous tools and techniques appropriate for offshore environments: for example,
systems for monitoring and sharing incident data, culture change tools, safety
diagnostics, behavioural observation and training, predictive risk models etc. (Flin,
O’Connor, and Crichton 2008; Kujath, Amyotte, and Khan 2010). However, to
optimise the effectiveness of such interventions, it is necessary to systematically
consider how the components underlying the DH incident interacted together to
escalate risk. Focussing on individual activities or environmental features identified
as causing the event may not be effective. This is because ameliorating one threat
420 T.W. Reader and P. O’Connor
to system safety offshore (e.g. changing procedures for the NPT) only addresses a
specific scenario or problem, and there are multiple pathways through which error
or problems in managing hazards can result in a mishap (Anderson 1999).
Whilst the specific failures leading to an incident might be addressed (e.g.
blowout preventer technology), deeper underlying problems (e.g. regulation, safety
culture) that underlie the failures remain and can combine to create similar events
in different areas of the offshore system. Furthermore, behaviours that appear
erroneous are often understood with hindsight, despite decision-makers being unable
to assess (e.g. due to information limitations) the consequences of their judgements
in real-time (Cook and Nemeth 2010). In utilising the DH analysis to shape safety
management across the offshore industry, it is necessary to begin charting and
explaining the relationships between the various components within the offshore
system that contributed to, or were involved in, the mishap. Understanding activity in
the context of the organisational system is essential for developing a more
integrated accident analysis.
Using the NTS and safety culture analysis above, Figure 1 captures the interac-
tions between the various events identified in the DH investigation as leading to the
mishap. The model takes inspiration from Rasmussen’s (1997) description of the
inter-connected systems and complex interactions leading to the Zeebrugge ferry
accident. It utilises Leveson’s (2011) focus on the hierarchical relationships
between: (i) the specific events/errors leading to an accident; (ii) the environmental
conditions that allowed the mishap to occur; and (iii) the underlying system factors
shaping the organisation and work conducted within it. In Leveson’s (2012)
Systems-Theoretic Accident Model and Processes (STAMP) approach, safety is
viewed as a control problem. Traditionally, accident models explain accident
causation in terms of a series of events. However, the STAMP approach considers
accidents to occur as a result of a lack of constraints imposed on the system design
and during operational deployment. In this approach, accidents in complex systems
are not considered to occur due to independent component failures. Rather they
result from external disturbances or dysfunctional interactions amongst system
components that are not adequately handled by the control system (see Leveson
2012 for more detail).
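STAMP's framing of safety as a control problem can be made concrete with a toy sketch (ours, purely illustrative; the constraint names are hypothetical paraphrases, not from STAMP tooling or the Commission report). The point it demonstrates is the distinction STAMP draws: a mishap is flagged either because a constraint was violated or because the control structure never enforced it, not because a component "failed" in isolation.

```python
# Toy illustration of STAMP's core idea: safety as a control problem.
# A mishap is modelled as a safety constraint that is either violated by
# the process state, or left unenforced by the control structure.
from dataclasses import dataclass

@dataclass
class SafetyConstraint:
    name: str
    check: callable          # returns True if the state satisfies the constraint
    enforced: bool = True    # False models an inadequate control action

def flagged(constraints, state):
    """Return the constraints a STAMP-style analysis would flag: those the
    control structure failed to enforce, or that the current state violates."""
    return [c.name for c in constraints
            if not c.enforced or not c.check(state)]

# Hypothetical process state and constraints, loosely echoing the DH analysis.
state = {"well_pressure_ok": False, "cement_verified": False}
constraints = [
    SafetyConstraint("maintain well integrity", lambda s: s["well_pressure_ok"]),
    SafetyConstraint("verify cement job", lambda s: s["cement_verified"],
                     enforced=False),
]
print(flagged(constraints, state))
# → ['maintain well integrity', 'verify cement job']
```

Note that the second constraint is flagged even before its check runs: in STAMP terms, the control action was never applied, which is itself an accident mechanism.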
In the model presented, time is not explicitly captured. Rather, systemic aspects
of the accident process, such as regulation, safety culture, communication, use of
third parties, and human factors engineering, are conceptualised as factors that
implicitly influenced the risk environment over time. Also, the model is limited to
the analysis of factors leading to the DH mishap described in the official National
Oil Spill Commission (2011) report, and undoubtedly there are factors yet to be
included.
Figure 1 identifies no single root cause or chain of behaviours as having caused
the accident. Although the failure of the blowout preventer is often seen as the
core failure underlying the oil spill (and remains the focus of technical and
non-technical research on safety interventions), the incident arose from a
combination of interlinked events occurring simultaneously, and these led to the
non-technical skills and safety culture problems described earlier. Many of the
accident mechanisms leading to the blowout (e.g. the unsuccessful cement job)
emerged from separate and distinct failures (e.g. flaws in the cement design,
inappropriate foam cement slurry). It is not clear whether ameliorating these
individually would have prevented the accident mechanisms from occurring, or
whether the deeper-lying conditions (flaws in the design of the well, poor
information sharing between contract and operating companies) would simply have
created alternative accident mechanisms (or postponed the mishap to a later point
in time).

Figure 1. Interactions between events leading to the Deepwater Horizon mishap, the
conditions that allowed the mishap to occur, and the system factors underlying the
mishap (as described within the National Oil Spill Commission Report (2011)).
[Figure: the Deepwater Horizon destruction and oil spill is linked to conditions
including flaws in the design of the well, rejection of the cement evaluation log,
lack of clear procedures for running the NPT, poor information sharing between
operator and contract companies, minimal communication on operational decisions,
informal risk assessment procedures, and not learning from previous incidents.
These conditions rest on system factors: production/cost-saving pressure,
third-party companies conducting safety-critical work, industry safety
standards/regulation, the communication culture between operational, management,
and contract staff, and human factors engineering.]
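The three-level structure of the model can be sketched as a small data structure (ours, illustrative only; the factor-to-condition mapping below is a simplified paraphrase of Figure 1, not an exhaustive encoding of it). Enumerating the structure makes the "no single root-cause chain" point explicit: many concurrent pathways run from system factors, through conditions, to the same outcome.

```python
# Illustrative sketch of Figure 1's hierarchy: system factors shape
# conditions, which combine into the mishap. Mapping is simplified.
figure1 = {
    "production/cost-saving pressure": [
        "informal risk assessment procedures",
    ],
    "industry safety standards/regulation": [
        "lack of clear procedures for running NPT",
        "not learning from previous incidents",
    ],
    "communication culture": [
        "poor info sharing between operator and contract companies",
        "minimal communication on operational decisions",
    ],
}

def pathways(model, outcome="Deepwater Horizon destruction and oil spill"):
    """Enumerate factor -> condition -> outcome pathways: there is no single
    root-cause chain, but many concurrent routes to the same mishap."""
    return [(factor, condition, outcome)
            for factor, conditions in model.items()
            for condition in conditions]

for p in pathways(figure1):
    print(" -> ".join(p))
```

Even this reduced mapping yields five distinct pathways to the same outcome, which is the sense in which removing any one failure might not have prevented the accident.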
Furthermore, the conditions leading to the DH mishap can be seen as manifesta-
tions of systemic factors, or “system migration to states of higher risk” (Leveson
2011, 60). For example, the lack of industry regulation for running safety-critical
processes created risks across the offshore system: there was no requirement for
formal risk assessments in the design of the well, for training in managing the
NPT and emergency scenarios, or for maintenance and inspection routines. It also
negatively influenced other system factors, such as safety culture and the
requirement for effective human factors engineering.
Other system factors, such as the culture of communication, shaped the work
environment through more implicit mechanisms (e.g. information sharing between
contract and operator staff, management listening to the concerns of junior staff)
and heightened the risks associated with safety critical tasks leading to the mishap.
For example, in terms of the decision-making and risk assessment of operators, the
analysis shows how ‘errors’ were shaped. In particular, the drilling crews monitor-
ing the progress of the well were unaware of the problems with the NPT, and their
attention was split across different tasks. This reflects a human factors engineering
problem (a systemic issue), and their decision-making was further constrained by a
lack of communication (and thus awareness) of the doubts surrounding the NPT.
In turn, those performing the NPT did not have the findings from the cement
evaluation log available, and were unaware of the problems surrounding the cement
slurry (known by Halliburton, but not communicated to BP), and thus of the need
to take a more conservative approach to assessing well integrity.
In summary, at the level of event/accident mechanism, the DH mishap has a
relatively traditional accident causation pathway. Active (NTS) and latent (safety
culture) factors combined to contribute to the incident, and it can be explained
from a traditional linear accident analysis perspective. However, closer inspection
of the incident using a systems-thinking perspective reveals a range of complicated
interdependent systems, with the accident potentially occurring through a multitude
of pathways. As offshore systems become increasingly complex, and the nature of
work itself more challenging, it becomes necessary to better understand the
inter-linking components that underlie the incident, and that may combine again in
future to cause different mishaps with similar causes.
Conclusions
A range of operational behaviours and underlying safety management problems are
identified as causing the DH disaster. We apply NTS and safety culture concepts to
interpret these factors, as they facilitate the design of interventions through
identifying common (yet, highly specific) patterns of cognition, social behaviour
and organisational management underlying mishaps. Yet, this would appear
insufficient for applying the lessons from the DH to prevent future offshore
mishaps. Specifically, for interventions to be effective, a better understanding is
required of the interactions and relationships between the various events, conditions,
and system factors that combined to produce the incident. We begin the
development of a systems-model delineating the various pathways through which
underlying system factors created organisational risk and compromised the activity
of operational staff on the DH in the lead-up to the blowout. The principles of this
model, which attempts to capture the migration of risk across complex
organisational systems, have applications beyond the offshore sector.
References
Anderson, P. 1999. “Complexity Theory and Organization Science.” Organization Science
10: 216–232.
Bea, R. 2002. “Human and Organizational Factors in Reliability Assessment and
Management of Offshore Structures.” Risk Analysis 22: 29–45.
BP. 2010. Deepwater Horizon: Accident Investigation Report. Houston, TX: BP.
Cook, R., and C. Nemeth. 2010. “‘Those Found Responsible Have Been Sacked’: Some
Observations on the Usefulness of Error.” Cognition, Technology & Work 12: 87–93.
Cox, S., and A. Cheyne. 2000. “Assessing Safety Culture in Offshore Environments.” Safety
Science 34: 111–129.
Crichton, M. 2009. “Improving Team Effectiveness Using Tactical Decision Games.” Safety
Science 47: 330–336.
Cullen. 1990. Report of the Official Inquiry into the Piper Alpha Disaster. London: HMSO.
DeJoy, D. 2005. “Behavior Change versus Culture Change: Divergent Approaches to
Managing Workplace Safety.” Safety Science 43: 105–129.
Dekker, S., P. Cilliers, and J. Hofmeyr. 2011. “The Complexity of Failure: Implications of
Complexity Theory for Safety Investigations.” Safety Science 49: 939–945.
Endsley, M. 1995. “Towards a Theory of Situation Awareness in Dynamic Systems.” Human
Factors 37: 32–64.
Faber, M., and M. Stewart. 2003. “Risk Assessment for Civil Engineering Facilities: Critical
Overview and Discussion.” Reliability Engineering & System Safety 80: 173–184.
Finucane, M., A. Alhakami, P. Slovic, and S. Johnson. 2000. “The Affect Heuristic in
Judgements of Risks and Benefits.” Journal of Behavioral Decision Making 13: 1–17.
Flin, R. 1995. “Crew Resource Management for Training Teams in the Offshore Oil
Industry.” European Journal of Industrial Training 9: 23–27.
Flin, R., K. Mearns, P. O’Connor, and R. Bryden. 2000. “Safety Climate: Identifying the
Common Features.” Safety Science 34: 177–192.
Flin, R., P. O’Connor, and M. Crichton. 2008. Safety at the Sharp End: A Guide to
Non-technical Skills. Aldershot: Ashgate.
Gordon, R., R. Flin, and K. Mearns. 2005. “Designing and Evaluating a Human Factors
Investigation Tool (HFIT) for Accident Analysis.” Safety Science 43: 147–171.
Guldenmund, F. 2000. “The Nature of Safety Culture: A Review of Theory and Research.”
Safety Science 34: 215–257.
Hopkins, A. 2011. “Management Walk-arounds: Lessons from the Gulf of Mexico Oil Well
Blowout.” Safety Science 49: 1421–1425.
Hopkins, A. 2012. Disastrous Decisions: The Human and Organisational Causes of the Gulf
of Mexico Blowout. Sydney: CCH Australia.
Katsakiori, P., G. Sakellaropoulos, and E. Manatakis. 2009. “Towards an Evaluation of
Accident Investigation Methods in Terms of Their Alignment with Accident Causation
Models.” Safety Science 47: 1007–1015.
Kirwan, B. 2001. “Coping with Accelerating Socio-technical Systems.” Safety Science 37:
77–107.
Kujath, M., P. Amyotte, and F. Khan. 2010. “A Conceptual Offshore Oil and Gas Process
Accident Model.” Journal of Loss Prevention in the Process Industries 23: 323–330.
Lawton, R., and D. Parker. 1998. “Individual Differences in Accident Liability: A Review
and Integrative Approach.” Human Factors 40: 655–671.
Leveson, N. 2011. “Applying Systems Thinking to Analyze and Learn from Events.” Safety
Science 49: 55–64.
Leveson, N. 2012. Engineering a Safer World: Systems Thinking Applied to Safety. Boston,
MA: MIT Press.
Mack, A. 2003. “Inattentional Blindness: Looking Without Seeing.” Current Directions in
Psychological Science 12: 179–184.
Mearns, K., R. Flin, and P. O’Connor. 2001. “Sharing ‘Worlds of Risk’: Improving Commu-
nication with Crew Resource Management.” Journal of Risk Research 4: 377–392.
Mearns, K., B. Kirwan, T. W. Reader, J. Jackson, R. Kennedy, and R. Gordon. 2013.
“Development of a Methodology for Understanding and Enhancing Safety Culture in Air
Traffic Management.” Safety Science 53: 123–133.
Mearns, K., S. Whitaker, and R. Flin. 2001. “Benchmarking Safety Climate in Hazardous Envi-
ronments: A Longitudinal, Inter-organisational Approach.” Risk Analysis 21: 771–786.
Mearns, K., S. Whitaker, and R. Flin. 2003. “Safety Climate, Safety Management Practice
and Safety Performance in Offshore Environments.” Safety Science 41: 641–680.
National Oil Spill Commission. 2011. Deep Water: The Gulf Oil Disaster and the Future of
Offshore Drilling. Washington, DC: National Commission on the BP Deepwater Horizon
Oil Spill and Offshore Drilling.
O’Connor, P., and R. Flin. 2003. “Crew Resource Management Training for Offshore Oil
Production Teams.” Safety Science 41: 591–609.
O’Hare, D. 2000. “The ‘Wheel of Misfortune’: A Taxonomic Approach to Human Factors in
Accident Investigation and Analysis in Aviation and Other Complex Systems.”
Ergonomics 43: 2001–2019.
Okstad, E., E. Jersin, and R. K. Tinmannsvik. 2012. “Accident Investigation in the