EUROCONTROL
DOCUMENT APPROVAL
The following table identifies all management authorities who have successively approved
the present issue of this document.

Authority                Name
DAP-SSH Safety Expert    Oliver Straeter
DAP-SSH Safety Domain    Jacques Beaufays
The DRM research project aimed at developing a simulation approach able to provide a
quantitative analysis of critical operator activities, considering the organisational context in
which they take place and the main cognitive processes underlying them. The project also
provided a trial application of the approach in a specific case study in the ATM context.
This approach, within the field of HRA, is able to interact with standard risk assessment
methodologies in order to “foresee” the possible criticalities arising from human performance
in ATC working contexts. Indeed, the simulator that has been used (named PROCOS;
Trucco & Leva, 2004) tries to integrate the quantification capabilities of the so-called “first
generation” human reliability assessment methods with a cognitive evaluation of the
operator. The simulator allows the analysis of both error prevention and error recovery, and
integrates cognitive human error analysis with standard hazard analysis methods
(Event Tree and Fault Tree) by means of a “semi-static approach”.
The dynamism of the simulator proposed in the present work is focused on the cognitive
simulation and, therefore, on the cognitive flowchart. The operator actions can modify only
the state of some equipment of the plant according to:
- a limited set of states into which the equipment can be switched;
- the error modes identified through the task analysis and extracted as a result of the
cognitive simulation of the operator;
- an explicit relation between the action outcomes (correct execution or error modes)
and equipment status modifications (the relation has been derived from the task
analysis).
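As an illustration only, the quasi-static update described by the points above can be sketched as a lookup table from (equipment, action outcome) pairs to new equipment states. All names, states and transitions below are hypothetical, not taken from PROCOS itself:

```python
# Hypothetical sketch of the "semi-static" plant update: the plant is not
# simulated dynamically; equipment can only switch among a limited set of
# states, driven by the outcome of each simulated operator action.

EQUIPMENT_STATES = {"valve_A": "open"}  # limited set of reachable states

# Explicit relation (in PROCOS, derived from the task analysis):
# (equipment, action outcome) -> new equipment state
TRANSITIONS = {
    ("valve_A", "correct_execution"): "closed",
    ("valve_A", "error_not_done"): "open",      # omission leaves the state unchanged
    ("valve_A", "error_other_than"): "jammed",
}

def apply_outcome(equipment: str, outcome: str) -> str:
    """Update the equipment state according to the action outcome;
    unmodelled outcomes leave the state unchanged."""
    new_state = TRANSITIONS.get((equipment, outcome), EQUIPMENT_STATES[equipment])
    EQUIPMENT_STATES[equipment] = new_state
    return new_state
```

The point of the table-driven form is that no plant dynamics are computed: every reachable state is enumerated in advance, which is what makes the approach "semi-static".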
Its focus is mainly on conveying a quantitative result, comparable to those of a traditional
HRA method, while also taking into account a cognitive analysis of the operator. As a further
step, the simulator considers the evaluation of error management as part of the overall
assessment, from the same cognitive point of view.
In order to prepare a trial application of the method, a case study was considered important
for carrying out the analysis and for testing the proposed method on a specific application.
The case study refers to one of the Use Cases developed within the CONOPS framework for
the activity of Air Traffic Controllers.
1.1 Introduction
The aim of this chapter is to provide an overview of the well-known and commonly applied
cognitive simulation tools and to compare them, underlining their advantages and limits.
A definition of cognitive simulation, also referred to as simulation of cognition, has been given
by Cacciabue and Hollnagel (1995):
“the simulation of cognition can be defined as the replication, by means of computer
programs, of the performance of a person (or a group of persons) in a selected set of
situations. The simulation must stipulate, in a pre-defined mode of representation, the way
in which the person (or persons) will respond to given events. The minimum requirement to
the simulation is that it produces the response the person would give. In addition the
simulation may also produce a trace of the changing internal mental states of the
person”.
In practice, a simulation is composed of three fundamental elements (Figure 1-1) that can be
considered necessary and sufficient for the development of a simulation of cognition:
- the theoretical cognitive model, which defines conservation principles, criteria,
parameters and variables that allow the cognitive and physical behaviour of
humans to be described in a conceptual form;
- the numerical algorithms and the computational architecture, by which a theory is
implemented in a working computerised form;
- the task analysis technique, which is applied to evaluate tasks and associated
working context, and to describe procedures and actual human performances in a
formal way.
Cognitive simulation can be divided into two main types: qualitative and quantitative.
• Qualitative simulation describes the structure, the links and the logical and dynamic
evolution of a cognitive process, from the reception of an external stimulus to the
subsequent action. This type of simulation can be used for predicting expected
behaviours, in some well defined specific cases, where machine performance is also
simulated to the same level of precision.
• Quantitative simulation is based on the structure of a qualitative one with the addition
of a computational section and can be used to make numerical estimates of human
behaviour. The qualitative study in this case is often coupled with a simulation of the
performance of the system the operator has to interact with. The final outcome of a
quantitative simulation can be the list of the types of action or errors performed by the
operator while executing a specific task, or a probability value for each type of action,
calculated through the simulation runs.
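The quantitative case can be sketched in a few lines: the qualitative stimulus-to-action chain is repeated many times, and relative frequencies of the outcomes give probability estimates. The outcome distribution below is an assumption chosen purely for illustration, not data from any of the simulators discussed:

```python
import random

# Illustrative Monte Carlo sketch of a quantitative cognitive simulation:
# each run reproduces one stimulus -> response chain and returns the type of
# action (or error) produced; repeating the runs yields probability estimates.

def one_run(rng: random.Random) -> str:
    """One simulated stimulus -> response; returns the action/error produced."""
    u = rng.random()
    if u < 0.90:
        return "correct_action"
    elif u < 0.97:
        return "omission"
    return "wrong_action"

def estimate_probabilities(n_runs: int, seed: int = 0) -> dict:
    """Estimate the probability of each action type from n_runs trials."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(n_runs):
        outcome = one_run(rng)
        counts[outcome] = counts.get(outcome, 0) + 1
    return {k: v / n_runs for k, v in counts.items()}
```

In a real simulator, `one_run` would be driven by the cognitive flowchart and a model of the machine, not by a fixed distribution; only the counting scheme carries over.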
In a wider context of cognitive simulation two different types of analysis can be distinguished:
retrospective and prospective.
• Retrospective analysis consists of the assessment of events involving human
interaction, such as accidents, incidents, or “near-misses”, with the objective of
searching in detail for the fundamental reasons, facts and causes (“root causes”) that
have promoted and fostered certain human behaviour.
• Prospective analysis makes it possible to predict and evaluate the consequences of
human-machine interaction, given an initiating event and a boundary configuration of
the system.
Figure 1-5 SYBORG architecture (from Takano, Sasou and Yoshimura 1995)
The plant simulation models the power generation system, the controls, and the
alarms in the plant.
The operator model accounts for three operators: one is the leader of the team and
the others are followers with different roles. It is assumed that the leader does not
observe or touch the control panel but accumulates information on the plant via
communication. In particular, the operator model consists of the following seven
micro-models:
- The attention micro model filters sensory information derived from machine
behaviour through the HMI and communication between operators through the
HHI.
The Human-Human Interface (HHI) model performs three fundamental functions:
task assignment, disagreement management and utterance management.
- The utterance management micro model, when communication takes place,
records the communication and sends it to the receiver. The answer has to be
fed back via the HHI in order to confirm the success of the communication.
- The task assignment micro model incorporates the characteristics of team
behaviour related to mutual cooperation in dealing with a work that is divided
among operators.
- The disagreement management micro model simulates the characteristic of team
behaviour related to the fact that real operators communicate to exchange plant
information and their thoughts on the plant conditions, and they decide on
countermeasures that are thought to be the best ones for the plant. The
disagreement solution micro model considers several dynamic parameters
(arousal level, confidence) and static parameters (expertness, reliability) to
describe a variety of communication processes.
The simulator is well able to describe also the interaction among members of the team;
however, it is tailored to a specific application and, in order to be used, it needs the input
coming from the plant simulator for which it has been built.
The Task Model is used to depict team tasks and to identify the associated context in
which the interaction within the operator team develops. A complex task is
subdivided and assigned to an operator in accordance with his individual
characteristics.
The Event Model specifies the developments of a situation after an initial event
occurs.
The Team Model defines the team factors (organisational structure, individual
characteristics of the operators that are at the root of the communication). In normal
operation the team structure is predetermined and each member of the team knows
what he has to do and how he has to communicate. The collaboration pattern is
dynamic because the environmental conditions change and the operators can
execute abnormal actions.
The Human-Machine Interface Model shows the layout of the control room and all
possible switches among the states of the indicators. It is assumed that one panel is
assigned to each operator. When an incident/accident occurs, the operators assist
each other by covering positions different from the planned ones.
The cognitive process of the team consists of four modules: identification of the symptoms,
decision making, planning and execution.
The current state of the system is identified depending on the know-how of the operator or
on information arising from the other members of the team.
During the Decision Making process the decision-maker chooses, within the bounds of his
authority, an option from the emergency list.
During the Planning process the planner, selected depending on his knowledge and
responsibility, chooses a procedure from the list of plans.
During the Execution process the executor, selected depending on his responsibility and
capacity, performs an operation from the action list according to the operative procedure.
The performance of the cognitive process is outlined by a timing fault tree, similarly to the
reliability assessment of a system. The representation includes the communication between
members of the team and the interaction with the dynamic context. For a quantitative
assessment it is necessary to know how the members of the team confront a normal event,
organise collaboration and produce communication. Because there is no database for an
error taxonomy of team performance, the reliability values are assessed from simulation
results. It is then possible to assess the reliability of the team in a specific context by
combining these results with the timing fault tree.
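The combination step described above can be sketched as follows: per-phase error probabilities estimated from simulation runs are propagated through fault-tree gates. The OR structure and all probability values below are assumptions for illustration only, not the actual timing fault tree of the simulator:

```python
# Illustrative sketch: simulation-derived team error probabilities combined
# through fault-tree gates (independence of the input events is assumed).

def and_gate(probs):
    """Top event fails only if all input events fail."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(probs):
    """Top event fails if any input event fails."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

# Hypothetical simulation-derived estimates for the four team modules:
p_symptom = 0.02    # failed symptom identification
p_decision = 0.01   # wrong option chosen
p_planning = 0.015  # wrong plan selected
p_execution = 0.03  # execution error

# Team fails if any phase fails (a simple OR structure):
p_team_failure = or_gate([p_symptom, p_decision, p_planning, p_execution])
```

A real timing fault tree would additionally carry time distributions on the gates; only the probabilistic combination is shown here.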
Figure 1-11 Process model for Human Factors and Technical training integration (Mauri, et al 2001)
The making of the simulator consists of three steps: creation of the model, conceptual design
and implementation.
a. Creation of the model
The model carried out is the outcome of the integration of SHELL (Edwards, 1988) and
PIPE (Cacciabue, 1998) models. Furthermore, few suggestions are deduced from the
COCOM model (Hollnagel, 1993a,b).
o SHELL model
- The Initial Set Up defines the initial conditions of the process; in particular it
should detail the environmental PIFs, the technician PIFs, and the state of objects
and tools.
- During the Simulation Run step, some or all actions belonging to the same task
are executed. Each action is performed by the software through the Action
Execution Flowchart. The path followed along the flowchart depicts the decisions
taken by the virtual maintenance man.
- Generation of Output Data: at the end of the process, the simulator indicates the
pathway followed, the action codes, a brief description, commission and omission
errors (if they occurred), the action and task times, and the trend of the
Environment/Technician PIFs during the run.
c. Implementation
The model described above has been implemented using Microsoft Visual Basic 6.
Data are managed through Microsoft Access.
Figure 1-16 The model PROCRU for individual crew member (Cacciabue 1998)
o The Simulation of the Aircraft includes Machine Dynamics, containing display and
control variables, and ATC/CREW model, which comprises communication with other
crew members and the external world, such as the air traffic control.
o The Simulation of the Individual Operator contains four main elements:
- The Monitoring Process, which handles display variables and incoming
communication and is affected by the situation and by psychological and external
factors such as stress.
- The assessment of the current situation (Information Processing), which is
influenced by monitored information, inherent knowledge and goals of the
operator.
1.2.7 The simulation MIDAS (Man Machine Integration Design and Analysis System)
MIDAS is a framework that accommodates models and knowledge structures for the
simulation of human-machine interactions during safety critical conditions (Corker and Smith,
1993). This workstation-based simulation system contains models of human performance
which can be used to evaluate possible new procedures, controls, and displays prior to more
expensive and time consuming human subject experiments. Several aviation applications
have demonstrated MIDAS’ ability to highlight procedural or equipment constraints and
produce human-system performance measures early in a platform’s lifecycle.
o The World Model of MIDAS supports a graphical representation of the physical
entities in an environment, using geometry either produced internally or imported from
a commercial CAD system. In addition to their physical aspects, the functionality of
controls and displays is captured by associating operating procedures and
behaviours to each graphical equipment component. These functional models are
TOPAZ is a simulator that can be used for analysing errors of Air Traffic Controllers. It is
based on a stochastic analysis framework which implies the following five activities:
a. Develop a stochastic dynamical model for the situation considered;
b. Where necessary, develop appropriate cognitive models for the human operators involved;
c. Perform the stochastic analysis necessary to decompose the risk assessment;
d. Execute the various assessment activities (e.g. through Monte Carlo simulation, numerical
evaluation, mathematical analysis, or a combination of these);
e. Validate the risk assessment exercise.
The aim of the TOPAZ developers was to represent, for the selected encounter scenarios,
the results from the qualitative safety assessment in the form of a Stochastic Differential
Equation (SDE) on a hybrid state space. Unfortunately, the direct identification of the SDE
model would be very complicated for most ATM situations. In addition to a very large state
space of the corresponding SDE, there are many interactions between the many state
components. Therefore the developers shifted their attention towards a systematic approach
to develop an SDE instantiation through the development of a specific type of Petri Net: the
Dynamically Coloured Petri Net (DCPN), (a more detailed description is in the references:
M.H.C. Everdij, H.A.P. Blom and M.B. Klompstra 1997).
Operator Model
The Operator Model used consists of a contextual human task-network model, which is
formulated in terms of a DCPN, and which effectively combines the cognitive modes of
Hollnagel (1993) with the Multiple Resources Theory of Wickens (1992), the classical
slips/lapses model (Reason, 1990) and the human capability to recover from errors
(Amalberti and Wioland, 1997). In addition, a model for the evolution of situational awareness
errors has been developed.
Scheduling of subtasks
The subtasks have been scheduled according to a defined strategy. The scheduling strategy
is expressed in the following (input) task parameters:
“Pre-emption: For each subtask an assumption is made whether it may pre-empt another
subtask.
Concurrency: For each subtask it is known whether it may be performed concurrently with
another subtask.
Initiation: For each subtask the circumstances under which the subtask should be performed
are known.”
The assumptions concerning Pre-emption and Concurrency are implemented according to
priority tables (Blom, Daams, Nijhuis 2000). These tables have been identified on the basis
of ATC human factors expert knowledge.
In terms of a stack of to-be-performed subtasks this scheduling principle can be formulated
generically as the following two rules:
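A minimal sketch of how priority tables of this kind could drive the selection of the next subtask(s) from the stack is given below. The subtask names, priority values and concurrency pairs are assumptions for illustration only, not the published TOPAZ tables:

```python
# Hypothetical priority/concurrency tables for ATCO subtask scheduling.
PRIORITY = {"conflict_resolution": 3, "coordination": 2, "monitoring": 1}
MAY_RUN_CONCURRENTLY = {frozenset(("monitoring", "coordination"))}

def next_tasks(stack):
    """Select the subtask(s) performed next from the to-be-performed stack:
    the highest-priority subtask pre-empts the others, and at most one further
    subtask is added if the table allows it to run concurrently."""
    if not stack:
        return []
    current = max(stack, key=lambda t: PRIORITY[t])  # pre-emption by priority
    selected = [current]
    for t in stack:
        if t != current and frozenset((t, current)) in MAY_RUN_CONCURRENTLY:
            selected.append(t)
            break  # at most one concurrent subtask in this sketch
    return selected
```

For example, a conflict-resolution subtask entering the stack suspends routine monitoring, whereas monitoring and coordination can proceed together.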
The developers performed a comparison against statistical data for the ATCO routine
monitoring concept: the time needed to detect severe deviations, for which available
statistical data exist (George et al., 1973). The ATCO performance model developed in
TOPAZ, with appropriate Petri Net models for the other relevant components in
conventional ATC, provided detection time results that agreed quite well with the measured
data. However, the simulator presents high complexity in its application and in the analysis
of the results.
Figure 1-20: General architecture of the IDAC dynamic response model (Mosleh and Chang 2004)
Any cognitive response of the operator or of the crew to a perceived external situation is
translated into a problem statement or a goal requiring a solution. The model also tries to
cover why and how a response process is initiated and why and how a goal or a solution is
selected or abandoned. In order to go through the I-D-A process dynamically and in
response to external dynamics, IDAC’s model has an internal engine comprised of the Mental
States, with its set of state variables and rules of behaviour, plus the information processing
engine of the Working Memory. The stimuli are an individual perception of the external world.
The tendency to act on stimuli includes the individual’s internal feelings pertaining to the
stimuli (e.g. time constraints, workload, etc.). These result in various psychological moods
(stress, alertness, etc.) that could affect the individual’s behaviour.
As described by the authors (Mosleh and Chang 2004), the cognitive engine (its parameters,
factors and rules) acts on the memory and generates a cognitive behaviour in response to the
scenario within which the activity has been initiated. Part of the dynamics of the operator
response is due to the change in the external environment. Perceived raw information is
temporarily stored in the Working Memory and serves as a stimulus to change the Mental
States. IDAC covers the continuum of operator cognitive processes and actions in form of
In the current application IDAC uses qualitative and quantitative scales in order to assess the
state of input variables and parameters (PSFs). These elements are then used to calculate
the score for each alternative response. The completeness of the set of possible alternatives
is assumed; therefore the probability of each alternative is calculated as the normalised score
of that alternative:
P_i = \frac{score_i}{\sum_{j=1}^{N} score_j}
Each PSF value ranges from 0 to 10. Static PSFs are input to the model and quantified by
HRA analysts using conventional methods such as expert judgment and surveys. Dynamic
PSFs are functions of the scenario and of the static PSFs.
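The normalised-score rule above is straightforward to sketch. The alternative names and score values below are illustrative assumptions; in IDAC the scores would themselves be computed from the PSFs:

```python
# Sketch of the normalised-score selection: the probability of each
# alternative response is its score divided by the sum of all scores,
# P_i = score_i / sum_j score_j (completeness of the set is assumed).

def alternative_probabilities(scores: dict) -> dict:
    """Normalise alternative scores into a probability distribution."""
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

# Hypothetical scores on the 0-10 scale for three candidate responses:
scores = {"follow_procedure": 8.0, "improvise": 1.5, "wait": 0.5}
probs = alternative_probabilities(scores)
```

By construction the probabilities sum to one, so the rule only ranks the alternatives relative to each other; it cannot express "none of the above", which is why completeness of the alternative set must be assumed.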
In IDAC, observable human actions are classified as errors with respect to external
reference points in the following way:
1) the crew behaviour is compared with the system needs or the actual system state;
2) the crew behaviour is compared with the procedure requirements; and
3) the procedure requirements are compared with the system needs.
A mismatch between the states and mutual requirements of any two of these reference
points can be classified as an error.
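The three pairwise comparisons can be captured in a few lines; the state labels used below are illustrative, not IDAC's internal representation:

```python
# Minimal sketch of the three-way comparison among crew behaviour,
# procedure requirement and system need; any pairwise mismatch can be
# classified as an error.

def classify_error(crew_action: str, procedure_step: str, system_need: str) -> list:
    """Return the list of mismatched reference-point pairs (empty if none)."""
    mismatches = []
    if crew_action != system_need:
        mismatches.append("crew_vs_system")
    if crew_action != procedure_step:
        mismatches.append("crew_vs_procedure")
    if procedure_step != system_need:
        mismatches.append("procedure_vs_system")
    return mismatches
```

Note that the third comparison can flag an error even when the crew followed the procedure exactly, i.e. when the procedure itself does not match the system need.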
ERROR TYPE
The taxonomy used for the recovery phase has been proposed by Kontogiannis
(Kontogiannis, 1997), breaking down the error handling process into three phases: detection,
localisation or explanation, and correction.
- Error in Detection: the failure occurs in the phase in which the error should be detected.
The detection can take place at different stages of the task execution:
o Detection in outcome stage
o Detection in execution stage
o Detection in planning stage
- Error in Localisation or explanation: after having detected the error, the operator tries
to identify its causes but he makes a mistake.
- Error in Correction: after having detected the error and identified its causes, the
operator develops and executes an action in order to recover the error but he makes
a mistake.
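The three-phase breakdown above implies that recovery succeeds only if detection, localisation and correction all succeed. A small Monte Carlo sketch of this chain follows; the per-phase success probabilities are assumptions chosen purely for illustration:

```python
import random

# Illustrative sketch of the Kontogiannis (1997) error-handling phases:
# recovery fails at the first phase whose cognitive step goes wrong.

PHASES = [("detection", 0.90), ("localisation", 0.85), ("correction", 0.95)]

def attempt_recovery(rng: random.Random) -> tuple:
    """Simulate one recovery attempt.
    Returns ('recovered', None) or ('failed', phase_that_failed)."""
    for phase, p_success in PHASES:
        if rng.random() >= p_success:
            return ("failed", phase)  # error in this handling phase
    return ("recovered", None)
```

Under independence, the analytic recovery probability is simply the product 0.90 × 0.85 × 0.95 ≈ 0.727, which a large number of simulated attempts reproduces.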
[Figure: PROCOS cognitive flowchart for the execution of a task step. Through decision
blocks such as “Remember a step to be executed?”, “Right step in intention?”, “Analyse the
system?”, “Planned a step to be executed?” and “Hardware element OK?”, each simulation
run leads either to a correct execution or to an error type (MEMORY, PERCEPTION,
DECISION) and the selection of an associated error mode (Not Done, Other Than, Less
Than, More Than, Sooner Than, Later Than, Misordered, Part of, As well as, Opposite).]

[Figure: Overall PROCOS architecture — the Operator Module and the HMI Module
exchange task input/output with the Task Execution Module.]
- the Operator Module consists of the cognitive flowcharts for the Action Execution and
Recovery phases, plus the correlation matrix between Error Types and Error Modes.
The critical underlying feature of this module is the mathematical model for the
decision block criteria of the flowcharts;
- the Task Execution Module refers to the procedure that has to be simulated. In the
first version of PROCOS this module was based on the Event Tree;
- the Human Machine Interface Module is made up of tables regarding the hardware
state and its connection with the operator actions (tasks executed or error modes
committed).
f_X(x) = f_X(x, p) = \begin{cases} p^x (1-p)^{1-x} & \text{for } x = 0 \text{ or } x = 1 \\ 0 & \text{otherwise} \end{cases}   (1.1)

where 0 ≤ p ≤ 1 and q = 1 − p.
The probability of having “Yes” as a possible exit of the block can be expressed as [P(X = 1)]
and it is equal to p, while the probability of having the “No” exit is [P(X = 0)] equal to q.
In order to calibrate each decision block, the value of p, the success probability of the
cognitive process in the block, has been expressed as a function of the PIFs involved in the
block (thus also evaluating the influence of the context on the cognitive process).
The SLIM method has then been chosen (Wickens, 1992), in particular the expression that
relates the Human Error Probability (HEP) to a Success Likelihood Index (SLI), which is a
logarithmic function of the PSFs involved (formula 1.2):

\log(HEP) = a \cdot SLI + b   (1.2)

where:
HEP → Human Error Probability
SLI = f(PSF) → Success Likelihood Index
a, b → calibration parameters

The SLI of the j-th block is a weighted sum of the PSF ratings:

SLI_j = \sum_{i=1}^{N_j} w_i \cdot r_i

where:
w_i → normalised weight of the i-th PSF for the cognitive process of the j-th block
r_i → i-th PSF value
N_j → number of PSFs for the j-th block

and \sum_{i=1}^{N_j} w_i = 1.
In the first application of PROCOS, for each decision block the HEP value has been taken
from the THERP data tables (Swain and Guttmann, 1983), chosen for an error type
At the beginning of a simulation process, the value r_i and the weight w_i of each PSF are
extracted as random variables from uniform distributions over the intervals [e, f] and
[w_inf, w_sup] respectively.
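Under the relations above, one simulation step for a decision block can be sketched as follows. The interval bounds, the calibration constants a and b, and the scaling of the ratings to [0, 1] are purely illustrative assumptions, not the calibrated PROCOS values:

```python
import random

# Sketch of one decision-block evaluation: sample PSF ratings and weights
# uniformly, normalise the weights so they sum to 1, compute the SLI as a
# weighted sum, map it to an HEP via the SLIM relation
# log10(HEP) = a * SLI + b, and draw the block exit as a Bernoulli trial.

def block_success_probability(n_psf, e=3.0, f=9.0, w_inf=0.1, w_sup=1.0,
                              a=-2.0, b=0.0, rng=None):
    """Return p, the success probability of the block's cognitive process."""
    rng = rng or random.Random()
    ratings = [rng.uniform(e, f) for _ in range(n_psf)]      # r_i in [e, f]
    weights = [rng.uniform(w_inf, w_sup) for _ in range(n_psf)]
    total = sum(weights)
    weights = [w / total for w in weights]                   # sum(w_i) = 1
    sli = sum(w * r / 10.0 for w, r in zip(weights, ratings))  # scaled to [0, 1]
    hep = 10.0 ** (a * sli + b)                              # SLIM relation
    return 1.0 - hep

def block_exit(p, rng):
    """Bernoulli exit of the decision block: 'Yes' with probability p."""
    return "Yes" if rng.random() < p else "No"
```

Each simulation run re-samples the PSFs, so repeated runs of the flowchart propagate the contextual uncertainty into the distribution of block exits.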
A strong point of this simulator is its medium-low application complexity: it is considerably
easier to apply than the other quantitative methods present in the literature. Furthermore,
PROCOS can be applied to many different fields with little effort to perform the necessary
changes.
Of these five:
o CES and COSIMO do not have a model for the interaction between the operator and
the external environment
o TOPAZ and MIDAS do have such a model but do not have a model for the interaction
among operators
o IDAC has both models; however, the interaction with the external environment is
based on a simulator of a nuclear power plant and its possible responses to different
actions. Therefore, adapting the method to the ATM case study could be quite
expensive.
Notes:
- Yes*: PROCOS does have a model for the interaction between operator and
environment; however, it is quasi-static, which means that the behaviour of the
external plant is not simulated but taken into account using:
o a limited set of states into which the equipment can be switched;
o an explicit relation between the action outcomes (correct execution or error
modes) and equipment status modifications (the relation has been derived
from the Hazop analysis).
- Sequential**: the cognitive model is based on an information processing approach;
however, the model comprises a cognitive model for the possible recovery phase of
an action.
- Yes***: interaction between operators is taken into account through the use of a part
of the cognitive flowchart especially dedicated to communication processes.
ConOps for 2011, as already said, provides a key input for the OATA project. The logical
architecture in OATA is developed using use cases, which are then realised through a
Unified Modelling Language (UML) process. The roles and responsibilities of the actors
involved in each use case have been identified in order to make sure that the use cases
completely capture the interactions between the concerned actors and the ATM system.
The operational context in which the actors and the system interface is provided as an
important input. The use cases also serve to provide a more detailed elaboration of particular
aspects of the scenarios, especially “what-if” situations (the alternative flows). The scenarios
and use cases shown in Annex I provide representative examples.
The use cases place themselves within a specific ConOps context: the ATM Process Model.
The process model has been developed because it presents some benefits as far as the
integrity and consistency of the approach are concerned, in particular:
Actor                 Strategic  Pre-Tactical  Tactical
Aircraft Operator         1           2            3
Airport Operator          4           5            6
Airspace Management       7           8            9
ATFCM                    13          14           15
In particular, Box number 12 in Figure 2-1, as an example, focuses on the Air Traffic Control
tasks at a Tactical Level. It is better presented in Figure 2-3, where the Air Traffic Control
activities in the so-called “Day of Operation” phase are displayed.
[Figure 2-2: ATM Process model — Air Traffic Control at a Tactical Level: the Continuous
Iterative Application Process across the Strategic (S), Pre-Tactical (P) and Tactical (T)
phases for the actors AO, AP, AM, AT and AF.]
If we then focus on the process highlighted in light blue, we are able to detail a bit more the
exact area on which the connection with the activity of the present project (Dynamic Risk
Modelling) can be focused (Figure 2-3).
The use cases that have been selected as examples, in order to assess their level of detail
and their compatibility with a possible cognitive simulator model, are placed within the
sub-processes of the activities that refer to Request Information, Instructions and
Clearances (Figure 2-3).
[Figure 2-3: “Day of Operation” sub-processes (DO5) — Real Time Air Picture, Human
Resources Allocation, En-route Sequence, Arrival Management, Network Operations Plan.]
Within EUROCONTROL, Human Reliability Analysis has already been carried out with some
“in house” and ad hoc methods. A more systematic approach is under development to make
better use of the incident analysis data collected with the HERA retrospective tool. This
approach, called HERA-Predictive, keeps the taxonomy and qualitative structure of
retrospective HERA and complements the data collected with a statistical approach, which
allows using the data in predictive safety assessments (Isaac, Van Damme & Sträter 2004).
The approach is an adaptation to the ATM environment of the CAHR approach developed in
the nuclear domain (Sträter 2000). Currently this approach is further developed under the
heading “Virtual Advisor”, as the approach should support safety assessments as a kind of
virtual expert. The following outlines how the HERA-Predictive approach works in principle,
based on the retrospective analysis of events.
Regarding the structure of the prospective and retrospective HERA approach, a research
project has been set up at EUROCONTROL that reviewed the theoretical and practical
literature to determine the best conceptual framework upon which to base an ATM incident
analysis tool. The conceptual framework chosen is that of human performance from an
information processing perspective (Shorrock, Kirwan 2002; Isaac et al., 2003). The
technique and the related taxonomy are model-based. A model in fact “allows causes and
their inter-relations to be better understood. An error model provides an ‘organizing principle’
to guide learning from errors. Trends and Patterns tend to make more sense when seen
against the background of a model and more ‘strategic’ approaches to error reduction may
arise, rather than short term error reduction initiatives following each single error event.”
(Shorrock et al 2003).
The main purposes of the HERA (retrospective and prospective) classification of human error
in ATM are:
“(i) Incident investigation - To identify and classify what types of error have occurred when
investigating specific ATM incidents (by interviewing people, analyzing logs and voice
recordings, etc.).
(ii) Retrospective incident analysis - To classify what types of error that have occurred within
present ATM systems on the basis of incident reports; this will typically involve the collection
(iii) Predictive error identification - To identify errors that may affect present and future
systems. This is termed Human Error Identification (HEI). Many of the classification systems
in this review are derived from HEI tools.
(iv) Human error quantification - To use existing data and identified human errors for
predictive quantification, i.e. determining how likely certain errors will be. Human error
quantification can be used for risk assessment purposes.” (Shorrock et al 2003).
In order to exploit the data for prospective assessment, the HERA-Predictive approach
(Isaac, Straeter, Van Damme 2004) was designed based on the experience of using event
data for safety assessment in the nuclear domain (Straeter 2005). This approach should
overcome the current situation in which, as far as Human Error Quantification is concerned,
the material available for prediction is mostly expert judgment. The lack of ad hoc data for the
quantification process is therefore one of the main issues affecting HRA applications in Air
Traffic Management.
The development of a numerical simulator able to represent the performance of the controller,
or of a team of controllers, in a specified context can provide a useful means for gathering
data and analysing the safety performance of a system. In fact, it could reproduce a sufficient
number of trials to obtain an estimate of Human Error Probabilities (HEPs).
The cognitive simulator (PROCOS) for supporting human reliability analysis in complex
operational context developed within Politecnico di Milano comprises two cognitive flow
charts reproducing the behaviour of a process industry operator. The flow charts are based
on a model with an information processing perspective very similar to the one underlying
the HERA classification. Therefore, it has been possible to modify the simulator in order to
take into account a more detailed insight into the context of analysis (ATM) and to obtain
suitable data for a possible quantification process. In the following paragraphs, the HERA framework
In order to classify and analyze errors in HERA the main factors to be described are shown
in Table 2-1.
Table 2-1: Main Factors to consider for analyzing human error with HERA (see e.g., Shorrock et al 2003).
Error
- Error Type: What keyword can be applied to the error (including rule breaking and
violation), in terms of timing, selection or quality of performance or communication?
- Error Detail (ED): What cognitive process was implicated in the error?
- Error Mechanism (EM): What cognitive function failed, and in what way did it fail?
- Information Processing Levels (IPs): How did the error occur in terms of psychological
mechanisms?
Context
- Information & Equipment: What was the topic of the error, the equipment used in the
error or the information involved? (e.g. what did the controller misperceive, forget,
misjudge, etc.?) What HMI element was the controller using?
- Contextual Conditions (CCs): What other factors, either internal or external to the
controller, affected the controller's performance?
The cognitive domains covered by the Information Processing activities considered in the
accident analysis technique are: perception and vigilance; memory; planning and decision
making; response execution.
Figure 2-4: Enhanced model of human information processing used in HERA (see e.g., Shorrock et al
2003).
The model used as the main skeleton, illustrated in Figure 2-4, is largely based on the
one proposed by Wickens (1992). The analyst uses HERA (for retrospective accident
analyses) following several steps, each associated with a specific flow chart. The steps are:
a. Defining the error type.
b. Defining the error or rule breaking or violation behaviour through a flowchart.
c. Identifying the Error Detail through a flowchart.
d. Identifying the Error Mechanism and associated Information Processing failures
through flowcharts.
e. Identifying the tasks from tables.
f. Identifying the Equipment and Information from tables.
g. Identifying all the Contextual Conditions through a flowchart and tables.
Examples of the flow charts used for identifying the Error Detail can be found in Reference
(Shorrock et al 2003).
The focus of the simulator is mainly on conveying a quantitative result, comparable to those of
a traditional HRA method, while also taking into account a cognitive analysis of the operator.
The Information Processing Level and the Error Mechanism are embedded in the structure of
the simulator, while the other elements constitute inputs for the simulation runs.
The model used for configuring the flow chart representing the operators is based on a
combination of PIPE (Cacciabue 1998) and SHELL. PIPE represents the process of human
cognition according to the "Minimal Modelling Manifesto" (Hollnagel 1993): "A Minimal Model
is a representation of the main principles of control and regulation that are established for a
domain, as well as for the capabilities and limitations of the controlling system". PIPE is
based, in fact, on the four main cognitive functions:
- Perception
- Interpretation
- Planning
- Execution.
The cognitive functions are influenced or triggered by input parameters such as hardware
stimuli and context stimuli. The cognitive path followed through these functions leads to a
response (output). The cognitive process involved makes use of the individual's
Memory/Knowledge Base and Allocation of Resources.
SHELL (Software, Hardware, Environment, Liveware-Liveware) (Hawkins 1987) has been
used for organizing the information regarding the context and the interactions between the
controller and other members of the ATM team or the pilots (Liveware), the equipment
(Hardware), the procedures (Software) and so on.
The combination of these two models shows a high number of commonalities with the
cognitive model proposed in HERA, and the flow charts representing the operator have been
developed accordingly.
The above elements are fully in line with the elements previously outlined within the
HERA approach. The task execution module will be tailored not on an Event Tree but on a
task analysis, likewise represented through a flow chart. The Performance Shaping Factors
correspond, in the HERA framework, to the Contextual Conditions, and the Hardware-Software
involved in task execution corresponds to what HERA refers to as Information & Equipment.
The main output of the simulator is a probability value for correct execution or failure of the
ATCO tasks identified as critical (obtained through multiple trial generation), as well as a
probability value for the corrective action in the recovery phase. These probability values
depend on the CCs, which are directly connected to the decision boxes of the flow charts
through the decision block criteria. In this way it is possible to take a cognitive point of view
into account in the Human Error Probability generation, allowing a more formalized
connection with the CCs, which are the key points for identifying organizational corrective or
preventive actions.
This section explores the applicability of the cognitive simulator developed within the
present PhD at Politecnico di Milano, already presented in the second chapter of the
present project, and its effectiveness in the analysis of one of the ConOps Use Cases.
The level of detail of the use cases will be discussed in comparison with the level of detail
required for the task analysis to be input to the cognitive simulator selected for the trial
application (PROCOS).
Furthermore, the HERA approach, its taxonomy and its HMI level of description will be
presented, in order to carry out the modifications needed for the simulator to capture the
elements that HERA is able to take into account. The changes performed on the simulator
for the trial applications will be presented in detail, so as to discuss the feasibility of the
application and its possible results.
The task analysis flow chart shown in annex II_A has been developed using MS Visio (MS
Visio is also compatible software for developing the task analysis in UML). The flow chart is
then broken down into records of an Excel table such as the one reported in annex II_B. Every
task is identified through a synthetic ID code, and every exit of a sub-step or event has a
column (Correct or Error Type-Error Mode) reporting the next sub-step it has to be linked to.
It is important to underline that the flow chart for the task analysis must not be confused
with the flow chart developed for the information processing activities of the operator
(cognitive flow chart). To every decision block of the task analysis flow chart is in fact
assigned a certain exit (correct execution or error mode) according to the run of the cognitive
flow chart, which simulates the actual human execution of the single sub-steps. All the
possible exits of the sub-steps are monitored, and the effects on and from the equipment
involved in the task are considered up to the level of detail required by the Use Case itself.
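The table-of-records structure described above can be sketched as a dictionary mapping each sub-step ID to its possible exits. The IDs, error modes and terminal states below are hypothetical, not taken from annex II_B:

```python
# Hypothetical fragment of a task-analysis table: each sub-step maps every
# possible exit (correct execution or an error mode) to the next sub-step;
# "END_OK" and "END_FAIL" are illustrative terminal states.
TASK_TABLE = {
    "TP1": {"correct": "TP2", "mishear": "TP1R"},   # TP1R: clarification step
    "TP1R": {"correct": "TP2", "mishear": "END_FAIL"},
    "TP2": {"correct": "END_OK", "omission": "END_FAIL"},
}

def run_task(exit_chooser, start: str = "TP1") -> str:
    """Walk the task table until a terminal state is reached.

    exit_chooser plays the role of the cognitive flow chart: given the
    current sub-step and its possible exits, it returns the exit taken.
    """
    step = start
    while step in TASK_TABLE:
        exits = TASK_TABLE[step]
        step = exits[exit_chooser(step, list(exits))]
    return step

# A chooser that always takes the correct exit ends the task successfully.
outcome = run_task(lambda step, exits: "correct")
```

In the real simulator the chooser is the stochastic run of the cognitive flow chart rather than a fixed rule.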
In this section, a detailed description of the flow chart that depicts the task will be provided.
Furthermore, the assessment of the external events present in the task will be discussed.
Sub-steps
Sub-steps, also called sub-tasks, constitute the human actions within the task. Each subtask
has to be configured as a "single unit" of human action for which the underlying cognitive
flowchart is compatible. Each possible error type and error mode outcome should be
explored. In accordance with the ConOps use cases, the effects from and on hardware
equipment have not been considered during the development of this project. Each sub-step
is therefore defined by:
- code and description;
- type of action (communication or action triggered by hardware stimuli);
- type of cognitive path required for performing the sub-step (skill-, rule- or knowledge-based
task); a "frequent step" type has been added in order to better classify the sub-tasks that are
performed very frequently by the ATCO following fixed rules;
- all possible exits of the sub-step (correct, or error type and error mode) with the following
step;
- visualisation of the sub-step within the task analysis flow chart.
Figure 3-2: Sub-step definition using the PROCOS interface. Clicking on the "available errors" button, it is
possible to select the Error Type. In the same way it is possible to choose the Error Mode.
Table 3-1: External events and pilot actions within the task

Code     Description                                     Prob.       Prob. Value  Source
E12      Pilot lands the aircraft anyway                 1/100000    0.000010     Expert judgement
E13      Plane B technically able to vacate Malpensa     1-1/10000   0.999900     Expert judgement
E2       Aircraft A technically able to vacate Malpensa  1-1/50000   0.999980     Expert judgement
TP11_E1  Pilot B lands the aircraft safely               1-1/1000    0.999000     Expert judgement
TP8_E1   Is the Pilot aware of his position              1-1/30000   0.999967     Expert judgement
The simulator should be able to assess the probability of deviations from the main flow by
means of multiple trials. The task analysis and the sub-steps of which it is composed
therefore constitute a very important input to the simulation process.
The main difference between the ConOps Use Case and the task is the presence of the
data-link system. In fact, even though the Use Case includes the data-link system, the task
selected for the trial application assumes that the data-link system is not available;
instructions and clearances are therefore issued exclusively by voice.
Pilot B could reject the planned runway exit by requesting a different one (event E10).
If Pilot B does not request a different runway exit, the task continues through the readback-
hearback process. Otherwise, the ATCO has to understand and process the pilot's request,
and can then either accept the request or confirm the runway exit proposed at the very
beginning. In both cases there will be a readback-hearback process. The task then proceeds
with the event "Pilot B is able to land" (event E20). For the trial application, we have decided
against simulating the case in which Pilot B is not able to land; in that case Pilot B would
inform the Tower Runway Control, which would instruct the pilot to perform a missed
approach. Therefore, Pilot B lands the aircraft.
If the aircraft is not technically able to vacate, the pilot communicates the problem and the
ATCO verifies that the runway is obstructed. Any error in these steps other than a delay in
Correct task
The possible outcomes of the correct task are:
Failed task
The possible outcomes of the failed task are:
i. ATCO irrevocable failure.
ii. Pilot B irrevocable failure.
iii. Pilot B is unable to vacate the runway and the ATCO does not detect it in time.
iv. Pilot B is issued a landing clearance and assisted in vacating the runway; however,
Pilot B vacates the runway via an unplanned runway exit and the ATCO does not detect it.
v. Warning in the readback-hearback process in the aftermath of the missed approach
instruction.
vi. The ATCO does not verify that aircraft B has vacated the runway (warning).
Therefore the Task Execution Module needs to be built up according to the scenario to be
simulated and the specific task to be analyzed.
PROCOS currently accepts input information about the scenario following the logical
structure of an event tree; the task analysis should therefore be developed with a
configuration able to match the event tree logic structure.
Furthermore, each step of the task has to be a simple single subtask configured as a "single
unit" of human action for which the underlying cognitive flowchart is compatible. Each
possible error type and error mode outcome should be explored, and the effects from and on
hardware equipment are of course part of the task analysis as well.
As already stated in chapter three of the present report, the main purposes of HERA
classification of human error in ATM are:
“(i) Incident investigation - To identify and classify what types of error have occurred when
investigating specific ATM incidents (by interviewing people, analyzing logs and voice
recordings, etc.).
(ii) Retrospective incident analysis - To classify what types of error that have occurred within
present ATM systems on the basis of incident reports; this will typically involve the collection
of human error data to detect trends over time and differences in recorded error types
between different systems and areas.
(iii) Predictive error identification - To identify errors that may affect present and future
systems. This is termed Human Error Identification (HEI). Many of the classification systems
in this review are derived from HEI tools.
(iv) Human error quantification - To use existing data and identified human errors for
predictive quantification, i.e. determining how likely certain errors will be. Human error
quantification can be used for risk assessment purposes.” (Isaac, Shorrock et al 2003).
The possible uses of the cognitive simulator PROCOS may fit all the above purposes; in
particular, a practical feasibility study showed that:
- As far as the classification used for the scope of incident investigation is concerned,
PROCOS presents an error classification system that fits the HERA classification
system well. The cognitive flow chart of PROCOS has been slightly modified in order to
take into account some peculiarities of the cognitive work domain faced by ATCOs;
therefore, specific sections for communication and for rule-based, frequently
performed tasks have been introduced.
- This in turn allows the use of data coming from past accidents (Retrospective
incident analysis) for calibrating the cognitive flow chart used within the simulator, as
shown in a later section of the document.
- Furthermore, the task analysis and the level of detail required within the task analysis
context make it possible to structure the identification of possible human errors within
specific ATCO tasks (Predictive Error Identification) in a way that is fully coherent with
the HERA classification.
As already described in the previous chapter, the model used for configuring the flow chart
representing the operators in PROCOS is based on a combination of PIPE (Cacciabue 1998)
and SHELL, where PIPE is built on the four main cognitive functions Perception,
Interpretation, Planning and Execution, triggered by hardware and context stimuli and
drawing on the Memory/Knowledge Base and Allocation of Resources of the individual.
The taxonomy chosen for describing the various Error Types is taken from Wickens (1992),
and thus fits the HERA framework well:
- Errors in Perception: errors regarding issues related to the picking up and
understanding of information.
- Errors in Memory: errors related to both short-term storage and more permanent
information based on the person's training and experience.
- Errors in Planning & Decision Making and in Response Execution, analogous to the
corresponding HERA categories.
The only category missing was "Violations"; the cognitive flow charts have therefore been
modified in order to take the error type "violation" into account. Furthermore, other
modifications have been implemented in order to address specific issues related to:
- communication;
- frequently performed tasks, which are mainly on a "rule-based level" (Rasmussen
1987).
The cognitive flow charts are presented in Annex III. The decision blocks are those coloured
in pink; other possible blocks are reported and briefly described in Table 6.
As already mentioned, the example of a possible quantitative analysis using PROCOS was
aimed at calibrating the simulation process on data made available by the analysis of past
accidents using HERA retrospectively.
Table 3-1 therefore reports the possible correspondence between PROCOS calibrated
decision blocks (and therefore error types) and the Error Types (and Error Modes) reported
in HERA.
Block  Description                                        Type                       HERA Error Mechanism
41     Communication heard/understood correctly by ATCO   calibrated decision block  PV-EM: Mishear
42     ATCO asks the pilot for clarification              calibrated decision block  PV-EM: No Auditory Detection
46     Clarification successful                           calibrated decision block  PV-EM: Mishear; PV-EM: Hearback Error
We can briefly summarise it here by saying that each decision block has two possible exits:
“Yes” and “No”. The exit process is stochastic and it depends on the PSFs (Performance
Shaping Factors) values and the influence they have on each decision block.
f_X(x; p) = p^x (1 - p)^(1-x)   for x = 0 or x = 1; 0 otherwise   (3.1)

where 0 ≤ p ≤ 1 and q = 1 - p.
The probability of having "Yes" as the exit of the block, P(X = 1), is equal to p, while the
probability of having the "No" exit, P(X = 0), is equal to q.
In order to calibrate each decision block, the value of p, the success probability of the
cognitive process in the block, has been expressed as a function of the PSFs involved for the
block (thus also in order to evaluate the influence of the context on the cognitive process).
The SLIM method has then been chosen (Wickens, 1992), in particular the expression that
relates the Human Error Probability (HEP) to a Success Likelihood Index (SLI), a weighted
sum of the PSFs involved, through a logarithmic function, "since it is assumed that changes
in human responses induced by changes in external conditions can be described by a
logarithmic relationship" (Fujita & Hollnagel, 2004): log(HEP) = a·SLI + b, where a and b are
calibration constants.
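A minimal sketch of this SLIM-style relation follows; the weights, ratings and the two anchor tasks of known HEP used for calibration are invented for illustration, not HERA data:

```python
import math

def sli(weights, ratings):
    """Success Likelihood Index: weighted sum of PSF ratings
    (weights are assumed normalised so that they sum to 1)."""
    return sum(w * r for w, r in zip(weights, ratings))

def calibrate(sli1, hep1, sli2, hep2):
    """Solve log10(HEP) = a*SLI + b from two anchor tasks of known HEP."""
    a = (math.log10(hep1) - math.log10(hep2)) / (sli1 - sli2)
    b = math.log10(hep1) - a * sli1
    return a, b

def hep_from_sli(s, a, b):
    """Map an SLI back to a Human Error Probability."""
    return 10 ** (a * s + b)

# Illustrative anchors: a well-supported task (SLI = 0.9, HEP = 1e-4)
# and a poorly supported one (SLI = 0.2, HEP = 1e-1).
a, b = calibrate(0.9, 1e-4, 0.2, 1e-1)
```

Any intermediate task then receives an HEP by computing its SLI and applying the calibrated line, which is the essence of the SLIM logarithmic relation referred to above.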
The data available from the HERA database can in fact provide indications of the rate of
occurrence of specific error types and also of which performance shaping factors (that is to
say, Contextual Conditions in the HERA taxonomy) played a negative role in a given event
where a certain type of error occurred. We will from now on refer to the PSFs as Contextual
Conditions (CCs) in order to be coherent with the HERA taxonomy.
The HERA dataset used for this project has the following characteristics:
- Number of recorded accident/near-miss events: 62
- Number of recorded ATCO errors: 91
– Perception & Vigilance: 38
– Memory: 37
– Planning & Decision Making: 36
– Response Execution: 10
- Number of recorded occurrences of Contextual Conditions (CCs): 130
- Number of movements during the reporting period: 4 million (estimate)
- Level of analysis of Contextual Conditions: Main Category
The main category of the contextual conditions available in the HERA taxonomy are the
following:
1. Pilot-Controller Communications
2. Pilot actions
3. Traffic and airspace
4. Weather
5. Documentation and procedures
6. Training and experience
7. Workplace design and HMI
8. Environment
9. Personal Factors
10. Team factors
11. Organisational factors
Their subcategories are listed in Table 3-4.
The CCs chosen from the above table for each category to be used for the Simulation Trial
are listed in Table 3-4.
They will be used for calculating an index similar to the SLI, called the Failure Likelihood
Index (FLI). Formula (3.2) is analogous to the one used for the SLI presented in
chapter 1. The main difference is that the weight of the effect that the CCs can have on the
situation is seen from a negative perspective: it takes into account only those Contextual
Conditions whose presence negatively affects the outcome of a task.
FLI = Σ_{i=1..Nj} (wi · ri)   (3.2)

where:
wi → normalised weight of the i-th CC for the cognitive process of the j-th block
ri → value of the i-th CC
Nj → number of CCs for the j-th block

and Σ_{i=1..Nj} wi = 1.
For the trial application the ri value of each CC is a Boolean value that can assume value 1 or
0 (present or not present), since this is the only information available at the moment from the
HERA database. The ri values are related to the absolute HEP through the ogive-shaped
Rasch curve:

HEP = e^((ri - μ)/sn) / (1 + e^((ri - μ)/sn))   (3.3)

where:
ri = percentage of errors in a situation given all situations of the same type in the database;
μ = 0.5, adjustment of the location of the crossing point (0.5 assigns rational processing);
sn = 0.075, empirical parameter to adjust the slope of the ogive curve;
e = natural exponent (2.718).
This curve was proposed by O. Straeter (2005) for relating the absolute HEP = n/N to the
empirical data collected from accident databases. In a method named CAHR, O. Straeter
(2000) found that the Rasch equation was an optimal calibration function, since it was the
closest curve approximating the relation between the percentage of errors in a certain
situation, given the number of situations of the same type in the database, and THERP data
on the absolute probability for a given type of error, as shown in Figure 3-4.
Figure 3-4: Percentage of errors in a situation given all situations of the same type (i) in the data base
compared to THERP HEP values for the same error types (Straeter 2000)
ri = e^((FLI - μ')/sn') / (1 + e^((FLI - μ')/sn'))   (3.4)
This formula needs to be calibrated: for each decision block of the cognitive flowchart we
need to identify the parameters μ' and sn' using the empirical data available in the HERA
database.
Table 3-6 reports an example of the Excel table used for the calibration of one decision
block.
Three anchor points are fixed.
- (FLI = 0, ri = 10^-3), which represents the best possible working conditions;
- (FLI = 1, ri = 0.9), which represents the worst possible working conditions;
- (FLI = FLI*, ri = ri*), which represents the normal working conditions.
The third point is extracted from HERA data. The procedure is set out in the following steps:

P(EMk) = nev_i / Nt,   where Nt is the total number of operations in the observation period;

P(CCi) is the probability of occurrence of CCi;
nev_i is the number of events linked to the block;
(Nev - nev_i) is the number of events not linked to the block.

The curve is then adjusted to make it fit the proposed calibration value better by applying the
least squares method.
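Under the simplifying assumption that only the two extreme anchor points are used (the text additionally fits the third, HERA-derived point by least squares), μ' and sn' can be obtained in closed form through the logit transform; a sketch:

```python
import math

def logit(r: float) -> float:
    return math.log(r / (1 - r))

def fit_from_anchors(f1, r1, f2, r2):
    """Solve ri = e^((FLI-mu)/s) / (1 + e^((FLI-mu)/s)) through two anchor
    points, using logit(ri) = (FLI - mu) / s (two linear equations)."""
    s = (f2 - f1) / (logit(r2) - logit(r1))
    mu = f1 - s * logit(r1)
    return mu, s

def r_from_fli(fli, mu, s):
    """Logistic curve of formula (3.4)."""
    z = math.exp((fli - mu) / s)
    return z / (1 + z)

# Anchor points from the text: (FLI = 0, ri = 1e-3) and (FLI = 1, ri = 0.9).
mu_p, sn_p = fit_from_anchors(0.0, 1e-3, 1.0, 0.9)
```

The remaining HERA-derived point can then be used to check or refine the fit by least squares, as the text describes.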
Table 3-5: example of the series obtained for the curve that relates ri with HEP (Formula 3.3) and the one
that then relates FLI with ri (Formula 3.4)
FLI = 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
HEP = 0.00129 0.00162 0.04011 0.99289 0.99859 0.99872 0.99873 0.99873 0.99873 0.99873 0.99873
Figure 3-5: Graph showing how the values of ri and HEP vary as functions of FLI (both ranging from 0 to 1)
Table 3-6: Example of the calibration table for decision block 1, "Operator monitoring the system
(perception of not alerted items)". For each possible corresponding HERA error mechanism
(PV-EM: No Auditory Detection; PV-IP: Distraction/Pre-occupation; PV-EM: No detection of visual
information; PV-IP: Visual Search Failure) the table records the associated contextual conditions
(Pilot-Controller Communication, Weather, Documentation and Procedures, Training and Experience,
Workplace design and HMI), their presence value (0 or 1), their occurrence counts and normalised
weights wi (PIFs), the resulting FLI (block Failure Likelihood index) and ri, and the fitted parameters
μ' and sn' with the corresponding square error of the fit.
Table 3-7: PSFs chosen from the HERA CCs to be used for the Simulation Trial, as represented in the
questionnaires provided to the ATCOs interviewed. Each PSF is rated for importance on a 0-100 scale.

- R/T interference
- Post Peak Traffic Period: period just after a high traffic load situation
- Training and experience: Level of Knowledge; Level of Experience
- Workplace Design and HMI: Conflicting information; Inaccessible information; Nuisance
information; Poor Display
- Environment: Distraction, job related or non job related (i.e. phone calls, chatting with a
colleague); Work Scheduling
The values collected are then used for establishing the range in which the mean value of the
importance of each CC (obtainable through the use of HERA data as well) can actually vary.
For each CC in fact, given the mean obtained from the HERA data and the value obtained
from the interviews it is possible to calculate the sample mean and the sample variance in
the following way:
- Sample mean: x̄ = (Σ_{i=1..n} xi) / n   (3.9)

- Sample variance: S^2 = Σ_{i=1..n} (xi - x̄)^2 / (n - 1)   (3.10)
where "n" is the number of answers belonging to the main category analysed and "x̄" is an
estimator of the population mean μ (the mean value of the weight for the CC analysed).
These quantities are measures of the central tendency and dispersion of the data collected.
The statistic (x̄ - μ)/(S/√n) is distributed as t_{n-1}, where (n - 1) is the number of degrees of
freedom of the distribution. Therefore, we can obtain a 100·(1 - α) percent confidence interval
for the mean value of the weight:
x̄ - t_{α/2} · S/√n ≤ μ ≤ x̄ + t_{α/2} · S/√n   (3.12)
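Formulas (3.9), (3.10) and (3.12) can be sketched directly. The ratings below are invented questionnaire answers, and the t quantile (t_{0.025,9} = 2.262 for n = 10) is taken from standard Student-t tables:

```python
import math

def mean_and_variance(xs):
    """Sample mean (3.9) and sample variance (3.10)."""
    n = len(xs)
    xbar = sum(xs) / n
    s2 = sum((x - xbar) ** 2 for x in xs) / (n - 1)
    return xbar, s2

def weight_confidence_interval(xs, t_alpha_half):
    """100*(1-alpha)% confidence interval for the mean CC weight (3.12);
    t_alpha_half must be looked up for n-1 degrees of freedom."""
    n = len(xs)
    xbar, s2 = mean_and_variance(xs)
    half_width = t_alpha_half * math.sqrt(s2 / n)
    return xbar - half_width, xbar + half_width

# Invented importance ratings from 10 ATCO questionnaires (0-100 scale).
ratings = [60, 55, 70, 65, 58, 62, 68, 61, 59, 64]
low, high = weight_confidence_interval(ratings, 2.262)  # 95% interval
```

The resulting interval is what each simulation run draws the CC weight from, as described next.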
This provides an interval within which the value of the CC weight in question lies during a
specific simulation run. Once the percentage of the confidence interval has been chosen, the
value of wi for each simulation run is extracted from the related range of values. Therefore,
at each run the weight of each CC is randomly extracted from its related interval (assuming a
uniform distribution). These values are then used for evaluating the FLI (formula 3.2), taking
into account that the presence of the CCs depends on the scenario that we want to simulate;
from the resulting FLI we evaluate, using formula (3.4), the corresponding ri. This value of ri
is then substituted into the ogive-shaped Rasch curve (3.3), and the final value for the q of
the Bernoulli process described in (3.1) is finally used for deciding stochastically the Yes or
No exit at each simulation run. This process needs to be performed for each decision block
of the cognitive simulator, since each decision block may have different values.
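The whole per-block procedure can be sketched end to end; the weight intervals, calibration parameters and seed below are illustrative assumptions, not calibrated PROCOS values:

```python
import math
import random

def logistic(x, mu, s):
    z = math.exp((x - mu) / s)
    return z / (1 + z)

def decision_block_exit(rng, weight_ranges, cc_present, mu_p, sn_p,
                        mu=0.5, sn=0.075):
    """One stochastic evaluation of a calibrated decision block:
    draw weights from their confidence intervals, compute FLI (3.2),
    map FLI to ri (3.4), ri to HEP via the Rasch curve (3.3), then
    decide the exit as a Bernoulli trial (3.1) with q = HEP."""
    ws = [rng.uniform(lo, hi) for lo, hi in weight_ranges]
    total = sum(ws)
    ws = [w / total for w in ws]  # re-normalise so the weights sum to 1
    fli = sum(w for w, present in zip(ws, cc_present) if present)
    ri = logistic(fli, mu_p, sn_p)   # (3.4)
    hep = logistic(ri, mu, sn)       # (3.3)
    return "No" if rng.random() < hep else "Yes"

rng = random.Random(42)
ranges = [(0.1, 0.3), (0.2, 0.4), (0.3, 0.5)]  # illustrative CC weight intervals
# mu_p, sn_p roughly matching the anchors (FLI=0 -> ri=1e-3, FLI=1 -> ri=0.9).
best = [decision_block_exit(rng, ranges, [False] * 3, 0.7586, 0.1098)
        for _ in range(1000)]
```

With all CCs absent the FLI is 0 and the HEP is about 1.3e-3, so the block almost always exits "Yes"; with all CCs present the FLI is 1 and the exit is almost always "No".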
Scenario 1   Scenario 2   Scenario 3   Scenario 4
External Contextual Conditions
Weather 0 1 0 1
Traffic and airspace 0 1 0 1
Pilot/controller communication 0 1 0 1
Human and Organizational Factors
Training 0 0 1 1
HMI 0 0 1 1
Work Environment 0 0 1 1
Team Factors 0 0 1 1
Personal Factors 0 0 1 1
Organization Factors 0 0 1 1
Table 4-1: Characteristics of the four scenarios chosen for the trial application.
For each scenario, different ranges of the Failure Likelihood Index (FLI) have to be simulated,
corresponding to the different contributions that different sets of CCs make to the FLI
value. At first glance, the extension and location of these ranges, reported in Figure 4-1
for a limited set of decision blocks of the Cognitive Flowchart, make it reasonable to
expect a low deviation of the simulation results.
Figure 4-1: Range of scenario simulation for some characteristic decision blocks of the Cognitive Flowchart.
4.1.2 Number of repetitions of simulation runs
In any experimental design problem, and in any design of a simulation campaign, a critical
decision is the choice of the number of repetitions of simulation runs. The type of results
to be obtained from the simulation, the structure of the task and the probability of the
events within the task strongly influence both the minimum number of cycles within a
simulation run and the minimum number of replicate runs required to obtain statistically
significant results.
The aim of this project is to estimate a Human Error Probability (HEP) that, in the nominal
case where Contextual Conditions do not play a negative role, we expect to be no higher
than 10^-3.
Some uncertain events generated during the execution of the simulated task (i.e.
handling of the aircraft landing) have probabilities of occurrence between 10^-4 and 10^-5
(Table 3-1); thus, some branches of the task flowchart have a very low probability of
occurrence.
In order to have a meaningful number of occurrences for every possible path of the task,
and according to an empirical rule that suggests setting the number of cycles at least one
order of magnitude higher than the reciprocal of the lowest probability of occurrence, it
has been decided to perform one million cycles for each simulation run, corresponding to
one million landings on the runway. Figure 4-2 shows an estimate of the time taken to
perform a single simulation run. It can be noted that the simulation time depends strongly
on both the number of simulation cycles and the computing power of the computer used
for the simulation.
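The empirical rule above reduces to a one-line calculation; the 1e-5 figure is the lowest event probability in Table 3-1:

```python
import math

def min_cycles(lowest_probability: float) -> int:
    """At least one order of magnitude more cycles than the reciprocal
    of the lowest event probability (empirical rule from the text)."""
    return math.ceil(10 / lowest_probability)

cycles = min_cycles(1e-5)  # one million cycles per simulation run
```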
Finally, considering the available time for performing the entire simulation campaign, it has
been decided to execute 20 repetitions for each scenario, sufficient to build significant
statistics.
[Figure: simulation time (time [h.min]) versus number of cycles per simulation run (1,E+02 to
1,E+06), for two configurations: Core Duo @ 2.00 GHz with 1 GB of RAM, and Pentium 4 @
1.70 GHz with 374 MB of RAM.]
Figure 4-2: Dependency of the simulation time on the number of simulation cycles.
- Rep_hd, Rep_stimuli, Rep_block, Rep_err_mode and Rep_err_type indicate
whether the error is related to hardware, stimuli, flowchart block, error mode or error
type respectively;
- Rep_value describes the type of error that occurred;
- Rep_data and Rep_time indicate when the error was recorded;
- Rep_user indicates the username of the analyst who started the simulation.
Diagnostic Report (columns: Rep_Id, Rep_action, Rep_simcode, Rep_task, Rep_subtsk,
Rep_hd, Rep_stimuli, Rep_block, Rep_value, Rep_err_mode, Rep_err_type, Rep_data,
Rep_time, Rep_user). Example record: Rep_Id = 1, Rep_action = ER01, Rep_simcode =
sim_01, Rep_task = TSK_01, Rep_value = "Table Code E10 did not find!!", Rep_err_type =
SubTask E10, Rep_data = 08/04/2006, Rep_time = 0.00.02, Rep_user = MASTER.
ii. Total statistic report. This report records information at the level of detail of the
task: it provides all the information needed to build the total statistics of the task
(see the next section). An example of some records of the total statistic report is
shown in Table 3.
Report_T
- The first and the second columns (Rep_id and Rep_simcode) are the same as in
the diagnostic report;
Report_D
In Table 4-4 the columns Rep_id, Rep_simcode, Rep_data, Rep_time and Rep_user
have been hidden because they are the same as in the Diagnostic report and the Total
statistic report.
- Rep_task is the code of the task simulated;
- Rep_subtsk is the code of the subtask simulated;
- Rep_hw is the code of the hardware involved in the subtask (in the trial
application there is no hardware involved, so each row of this column is NO);
- Rep_stimuli indicates the code of the type of hardware stimuli that has
triggered the subtask;
- Rep_block indicates if the subtask is a communication process (C) or not (R);
Table 4-5: Example of some records of the Block statistic report (Block_statistic_report) of
PROCOS. Its columns are: Rep_Id, Rep_task, Rep_subtsk, Rep_codsim, then for each of
the up to ten blocks Rep_block_n, Rep_desblck_n, Rep_qta_ny and Rep_qta_nn
(n = 1..10), followed by Rep_data, Rep_time and Rep_user.
- Rep_subtsk is the code of the subtask related to the block recorded;
- Rep_block_1 to Rep_block_10 are the codes of the selected blocks;
- Rep_desblck_1 to Rep_desblck_10 report the description of each block;
- Rep_qta_1y/Rep_qta_1n to Rep_qta_10y/Rep_qta_10n are respectively the
occurrences of exit "Yes" (y) or "No" (n) of each block (from 1 to 10 maximum).
The Diagnostic report and the Detailed report have been used for debugging and during
the calibration process of the simulator. While the Diagnostic report gives an immediate
indication of errors in data entry (omission of requested inputs or incompatibility among
the values of different inputs), the Detailed report, coupled with the Total statistic report,
is useful for detecting both the causes of repeated simulation exits and any errors in the
calculation process within the computer program.
After the calibration phase, the Diagnostic report and the Detailed report have no longer
been used: the Total statistic report and the Block statistic report are enough to calculate
the statistics of interest for this work. The computer program allows the analyst to
deselect the Detailed report in order to reduce the computational load and, thus, the
simulation time.
Thus, the different exits from the task are recorded in order to distinguish between
irrevocable and non-irrevocable (warning) errors (Figure 4-3).
The exits of the task have been ranked considering the magnitude of potential negative
consequences as follows:
[Table: ranking of the exits of the task; columns: Type of exit, Exit task.]
Two more “failure end states” of the task refer to actions of the pilot:
To complete the analysis of the task, the need of recovery actions of the ATCO has been
studied. Two different types of recovery have been identified:
- Recovery by procedure: it is placed at the task analysis level and represents the
possible recovery procedure described as a “deviated” path within the task
analysis;
- Recovery by clarification: it is placed at the cognitive flowchart level and
represents the recovery capabilities provided by the communication process.
For each scenario, the average number of occurrences of recovery actions and the average
number of occurrences of correct recovery, both “by procedure” and “by clarification”, are recorded.
Then the absolute probability of recovery action [recovery/movements] and the absolute
probability of recovery failures [failures/recovery] are calculated as shown in Figure 4-4.
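The two ratios can be computed directly from the recorded occurrence counts. A minimal sketch, with illustrative variable names and counts (not taken from the study):

```python
# Computes the two recovery statistics described above:
#   absolute probability of recovery action  = recoveries / movements
#   absolute probability of recovery failure = failed recoveries / recoveries

def recovery_stats(movements, recoveries, correct_recoveries):
    p_recovery = recoveries / movements                        # [recovery/movements]
    failures = recoveries - correct_recoveries
    p_failure = failures / recoveries if recoveries else 0.0   # [failures/recovery]
    return p_recovery, p_failure

# e.g. 10,000 simulated movements, 500 recovery actions, 460 of them correct
p_rec, p_fail = recovery_stats(10_000, 500, 460)
print(p_rec, p_fail)  # 0.05 0.08
```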
After the analysis of the task, the occurrences of single error types have been analysed
(Figure 4-5). Errors in the communication process (readback-hearback process) and other
errors during task execution (hardware stimuli) are shown in the same figure in order to
underline their dependency.
Figure 4-5: Computational model for the assessment of the probability of different error types.
Each probability is referred to the number of movements; then, using an estimate of the
number of movements in one year, the error probability can also be referred to the
operational time.
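Converting a per-movement probability into an expected frequency over operational time is a single multiplication; in the sketch below the annual movement count is an illustrative assumption, not a figure from the study:

```python
# Expected number of errors per year, given a per-movement error probability
# and an (assumed) estimate of yearly movements.

def errors_per_year(p_per_movement, movements_per_year):
    return p_per_movement * movements_per_year

# e.g. a per-movement probability of 8.51E-05 and an assumed 200,000
# movements per year give roughly 17 expected errors per year.
print(round(errors_per_year(8.51e-5, 200_000), 2))  # 17.02
```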
5. ANALYSIS OF THE RESULTS FROM THE CASE STUDY: AN
EVALUATION OF THE EXPERIENCE GAINED
This chapter presents the results of the simulation campaign. The discussion of the
results leads to some first conclusions regarding both the case study and the
simulator performance, including the possible strengths and weaknesses of the current
simulation approach. Furthermore, the pilot study already points out potential future
developments for the application of PROCOS within the CONOPS framework.
[Figure 5-1: Mean value and standard deviation of the probability of correct task and of failed task for scenarios S1 to S4.]
Figure 5-1 shows how the failure probability of the task increases as the conditions of the
scenario worsen. The relationship between the number of Contextual Conditions (CCs)
affecting the scenario and the failure probability is not linear: the increase from one
scenario to another depends on the weight of the Contextual Conditions that play a role
within those scenarios. Specifically, Figure 5-1 indicates that the human and
organisational factors (scenario 3) have a stronger impact on the performance of the
ATCO than the external contextual conditions (scenario 2). Indeed, compared to the base
case (scenario 1), the failure probability of the task increases by two orders of magnitude
when the human and organisational factors negatively affect the ATCO, while the order of
magnitude of the failure probability remains the same even when all the negative external
conditions are considered.
One last consideration can be made regarding the overall failure probability of the task.
When the air traffic controller works in the best conditions (scenario 1), the failure
probability of the task is not zero but about 10^-3. This value might appear high, but
Figure 5-2 shows that the probability of irrevocable failure is only about 10^-6. Another
failure end state observed in scenario 1 is the ATCO's error of omission in verifying that
the pilot has vacated the runway, but this situation is not safety-critical.
[Figure 5-2: Failure probability [failures/movement] for scenarios S1 to S4, broken down by failure end state: ATCO doesn't issue the instruction; runway vacated, delayed confirmation by ATCO; runway vacated, ATCO doesn't verify visually; aircraft obstructing the runway, delayed understanding; irrevocable failure.]
Figure 5-2 shows that the probability of occurrence of an irrevocable failure grows as the
work conditions worsen. Consistently with Figure 5-1, Figure 5-2 outlines that the
human and organisational factors have a stronger impact than the external contextual
conditions.
It can be observed that the failure end states displayed in violet and orange are
reversed between scenario 2 and scenario 3. This behaviour is due to the different
way the two scenarios influence the operator: excluding irrevocable failures, when the
external conditions play a role in the task execution, the tower runway controller is prone
to make more serious errors.
When the human and organisational factors affect the working conditions of the ATCO, a
wider spectrum of possible failure end states of the task is registered. Indeed, the
simulations of scenarios 3 and 4 recorded all kinds of previously defined task failures.
In order to complete the analysis of the overall task, the probability of recovery actions
following the occurrence of a non-irrevocable failure has been studied. Figure 5-3 and
Figure 5-4 show the results.
Figure 5-3: Probability of recovery actions [recovery/movements].
                            S1         S2         S3         S4
Recovery by procedure       6,04E-04   2,40E-03   5,05E-01   9,97E-01
Recovery by clarification   9,35E-01   1,00E+00   7,11E-01   1,15E-02
Figure 5-4: Probability of recovery failures [failures/recovery].
                            S1         S2         S3         S4
Recovery by procedure       7,72E-04   3,09E-02   6,88E-01   9,97E-01
Recovery by clarification   2,18E-03   9,90E-02   5,90E-01   9,98E-01
Figure 5-3 shows that, even in the best working conditions, the ATCO makes use of his
recovery skills to solve any misunderstandings arising during communication with the
pilot. The repeated use of the ability to recover by clarification, as feedback within a
communication process, is therefore normal. Furthermore, the probability of recovery
failure is very low in scenarios 1 and 2 (Figure 5-4); that is, if the human and
organisational factors do not affect the ATCO's capability to perform recovery actions,
the recovery will almost always be correct.
Conversely, when the human and organisational factors affect the performance of the
operator, the air traffic controller exploits the capability of recovery by procedure, but the
probability of performing a correct recovery is very low, because it is affected by the
operator's inner negative influencing factors.
5.2 Error type analysis
In this paragraph, the analysis of the different error types is presented.
Figure 5-5: Error probability [errors/movement] per error type.
                          S1         S2         S3         S4
ET Perception             0,00E+00   4,56E-05   8,71E-03   2,57E-03
ET Interpretation         1,04E-03   1,95E-03   2,68E-03   1,86E-05
ET Response/Execution     8,10E-04   2,19E-03   4,01E-02   1,22E-04
ET Communication          1,30E-06   8,43E-05   6,38E-01   9,94E-01
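Given these per-movement probabilities, the dominant error type per scenario can be read off programmatically; a minimal sketch (the dictionary layout and function name are illustrative, the values are those reported in Figure 5-5):

```python
# Per-movement error probabilities per error type and scenario (Figure 5-5).
DATA = {
    "ET Perception":         {"S1": 0.0,     "S2": 4.56e-5, "S3": 8.71e-3, "S4": 2.57e-3},
    "ET Interpretation":     {"S1": 1.04e-3, "S2": 1.95e-3, "S3": 2.68e-3, "S4": 1.86e-5},
    "ET Response/Execution": {"S1": 8.10e-4, "S2": 2.19e-3, "S3": 4.01e-2, "S4": 1.22e-4},
    "ET Communication":      {"S1": 1.30e-6, "S2": 8.43e-5, "S3": 6.38e-1, "S4": 9.94e-1},
}

def dominant_error_type(scenario):
    # Return the error type with the highest per-movement probability.
    return max(DATA, key=lambda et: DATA[et][scenario])

print(dominant_error_type("S1"))  # ET Interpretation
print(dominant_error_type("S4"))  # ET Communication
```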
The probability figures for the different error types depend both on the calibration of each
decision block within the cognitive flowchart and on how often the specific error type can
occur within the task analysis.
Although communication is, in general, very important in any task performed by air
traffic controllers, Figure 5-5 shows how much this is true for this Use Case. For
scenario 2 the probability of error in communication is low because the probability of
correct recovery by clarification is very high.
The probability of an interpretation error is almost constant across the scenarios
because the task of this Use Case does not comprise any relevant diagnostic or planning
process.
In general, the error types observed in the worst scenario focus the attention on
communication problems: communication errors prevail, the other error types having a
probability of occurrence two orders of magnitude lower.
Looking at scenarios 2 and 3, a wider spectrum of error types might occur, suggesting
that intermediate situations are more difficult to manage, and thus to improve, than the
boundary situations. Indeed, if the aim were to work under the Contextual Conditions of
scenario 4 and to focus the effort on improving the communication process, large safety
gains, i.e. a high rate of reduction of the task failure probability, would certainly be
achieved. The same does not hold for scenarios 2 and 3.
When the interaction between the ATCO and the context lacks complexity (scenario 1),
the error types committed by the air traffic controller are slips in the execution of the task
(e.g. slips of the tongue) or high-level cognitive mistakes (interpretation errors).
Finally, it can be said that the model provides an estimate of the probability of the error
types while considering the dependencies among them. Indeed, comparing the probability
of error in perception and in execution for scenarios 3 and 4, it can be observed that,
given an increase in communication errors, the rate of decrease of errors in perception is
smaller than that of errors in execution, because the ATCO reaches the execution phase
of the task less often.
2. Scenario Setting
a. Setting of critical Contextual Conditions (HERA) and assessment of CCs
importance (experts’ judgements)
b. Data gathering for technical and “external” events influencing the task
c. Defining the set of operational scenarios to be simulated
3. Calibration Process
a. HERA DB analysis and error type setting for single steps of the task
b. Calibration of the Operator Model of PROCOS with the HERA dataset. Setting
the transfer function FLI(CCs) → HEP(FLI)
c. PROCOS model testing and validation
- The approach can be adapted to many different fields of study with little
effort (e.g. process industry, nuclear, railway).
many couples (FLI, ri), modelling the relationship between ri and FLI through the
curve that best fits the empirical observation points.
[Figure: ri plotted against FLI, with observation points P(0, ..., cci, ..., 0) at FLI1, ..., P(1, ..., cci, ..., 0) at FLIi, ..., P(1, ..., cci, ..., 1) at FLIn.]
Figure 5-6: Calibration function derived from a larger set of empirical observation
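The last calibration step, fitting a curve through the empirical (FLI, ri) pairs, can be sketched with a simple least-squares fit. The log-linear functional form and the sample points below are assumptions for illustration; the study itself only requires the curve that best fits the observations:

```python
import math

# Least-squares fit of ln(r) = a + b * FLI through empirical (FLI, r) pairs,
# i.e. r(FLI) = exp(a + b * FLI). The functional form and the sample points
# are illustrative assumptions, not taken from the study.

def fit_loglinear(points):
    xs = [fli for fli, _ in points]
    ys = [math.log(r) for _, r in points]
    n = len(points)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

points = [(1.0, 1e-3), (2.0, 1e-2), (3.0, 1e-1)]   # assumed (FLI_i, r_i) pairs
a, b = fit_loglinear(points)
r = lambda fli: math.exp(a + b * fli)
print(round(r(2.0), 4))  # 0.01
```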
6. REFERENCES
[1]. Acosta CG , Siu N., Dynamic event tree analysis methods (DETAM) for accident
sequence analysis. MITNE-295, Cambridge, MA: Massachusetts Institute of
Technology, 1991.
[2]. Amalberti R. and Wioland L., ( 1997) Human error in aviation, In: Aviation safety,
pp. 91-108, H. Soekkha (Ed.).
[3]. Amendola A, Accident Sequence dynamic simulation versus event trees. Reliability
Engineering and System Safety 1988; 22:3-25
[4]. Blom H. A.P., Daams J. and Nijhuis H. B., Human cognition modelling in ATM
safety assessment, 3rd USA/Europe Air Traffic Management R&D Seminar Napoli,
13-16 June 2000.
[5]. Buck, S., Biemans, M.C.M., Hilburn, B.G., van Woerkom, P.Th.L.M. (1996),
Synthesis of functions, NLR Report TR 97054 L.
[6]. Cacciabue P.C. and Hollnagel E. (1995) Simulation of Cognition: applications. In
J.M. Hoc, P.C. Cacciabue and E. Hollnagel (Eds), Expertise and Technology:
Cognition and Human-Computer Interaction. Lawrence Erlbaum Associates,
Hillsdale, New Jersey, pp. 55-73.
[7]. Cacciabue P.C. (1998) Modelling and Simulation of Human Behaviour in System
Control, Springer-Verlag, London.
[8]. Cacciabue, P. C., Decortis, F., Drozdowicz, B., Masson, M., and Nordvik, J. P.
(1992). "COSIMO: A Cognitive Simulation Model of Human Decision Making and
Behaviour in Accident Management of Complex Plants." IEEE Transaction on
Systems, Man and Cybernetics, IEEE-SMC, 22(5), 1058-1074.
[9]. Chang, Y.H. and A. Mosleh. Dynamic PRA Using ADS with RELAP5 Code as Its
Thermal Hydraulic Module. in Probabilistic Safety Assessment and Management
(PSAM) 4. 1998. New York: Sept. 13-18, 1998: Springer.
[10]. Chang, Y.H. and Mosleh A. , Cognitive Modeling and Dynamic Probabilistic
Simulation of Operating Crew Response to Complex System Accidents (ADS-
IDACrew). 1999, CTRS-B6-06, College Park, Maryland: Center for Technology
Risk Studies, University of Maryland.
[11]. Corker K.M. (1999) Human Performance Simulation In The Analysis Of Advanced
Air Traffic Management Proceedings of the 1999 Winter Simulation Conference P.
A. Farrington, H. B. Nembhard, D. T. Sturrock, and G. W. Evans, eds.
[12]. Corker, K. M., and Smith, B. (1993). "An architecture and modelling for cognitive
engineering simulation analysis: application to advanced aviation analysis." 9th
AAIA Conference on Computing in Aerospace, San Diego, CA, US.
[13]. Edwards E. Human Factors in Aviation Academic Press, San Diego, 1988 CA pp
3-25.
[14]. EUROCONTROL ATM Operational concept volume 2 Concept of Operation Year
2011 Edition 1 Brussels 03.05.2005. Proposed Released Issue.
[15]. Everdij M.H.C. , Blom H.A.P. and Klompstra M.B., (1997) Dynamically Coloured
Petri Nets for Air Traffic Management Safety purposes, Proc. 8th IFAC Symposium
on Transportation Systems, pp. 184-189.
[16]. Fujita Y., Hollnagel E., “Failures without errors: quantification of context in HRA”.
Reliability engineering and System Safety, Vol.83, pg.141 – 151, 2004.
[17]. George, P.H., Johnson, A.E., Hopkin, V.D.,(1973) Radar monitoring of parallel
tracks, automatic warning to controllers of track deviations in a parallel track
system, EEC Report No 67, Bretigny.
[18]. Hawkins F.H. Human Factors in Flight. Aldershot, UK: Gower Technical Press, 1987.
[19]. Hollnagel E. (1993). “Human Reliability Analysis, Context and Control”. Academic
Press, London.
[20]. Hooke N. Maritime Casualties 1963-1996 London LLP 1997.
[21]. IAEA “Report on the preliminary fact finding mission following the accident at the
nuclear fuel processing facility in Tokaimura, Japan” Austria November 1999.
[22]. IAEA/ WHO/EC “ Ten Years after Chernobyl: what do we really know?” based on
the proceeding of the IAEA/WHO/EC International Conference Vienna April 1996.
International Atomic Energy Agency Division of Public Information 1997.
[23]. Isaac A., Shorrock S., Kennedy R., Kirwan B., Andresen H. and Bove T. “The
Human Error in ATM Technique (HERA-JANUS)”. HRS/HSP-002-REP-03,
EUROCONTROL, Edition 1.0, 21 Feb 03.
[24]. Isaac A., Shorrock S., Kirwan B. Human error in European air traffic management:
the HERA project. Reliability Engineering and System Safety, Volume 75, Issue 2,
February 2002, pp. 257-272.
[25]. Isaac, A., Straeter, O. & Van Damme, D. A method for predicting Human Error in
ATM (HERA Predict). HRS/HSP-002-REP-07. EUROCONTROL. Brussels 2004.
[26]. Kemeny J.G. (1979) “Report of the President's Commission on the Accident at Three
Mile Island”. Washington DC: US Government Printing Office.
[27]. Kontogiannis T., “A framework for the analysis of cognitive reliability in complex
systems: a recovery centred approach”, Reliability Engineering and System Safety,
Vol. 58, 1997.
[28]. Kurzman D. A killing wind: Inside Union Carbide and the Bhopal Catastrophe.
McGraw Hill Book Company 1987.
[29]. Mauri C. Owen D., Baranzini D., Model of Human Machine Integrated System,
AITRAM Deliverable D04.1 WP4, 5th Framework Programme, October 2001.
[30]. Mosleh, A. and Y.H. Chang, Model-Based Human Reliability Analysis: Prospects
and Requirements. Reliability Engineering & System Safety, 2004. 83(2): p. 241-
253.
[31]. Mould R.F. Chernobyl Record: the definitive History of the Chernobyl Catastrophe
Bristol UK Philadelphia PA Institute of Physics Publishing 2000
[32]. Rasch G. Probabilistic Model for some Intelligence and Attainment Tests.
University of Chicago Press. Chicago 1980.
[33]. Rasmussen J. & Vicente K.J. “Cognitive Control of Human Activities: Implications
for Ecological Interface Design”. RISO-M-2660, Roskilde, Denmark: Riso National
Laboratory, 1987.
[34]. Reason J. , Human error, Cambridge Univ. Press, 1990.
[35]. Robins J. “The World’s Greatest Disasters” London Chancellor Press 1990
[36]. Shu, Y., Futura, K., and Kondo, S. (2002). "Team performance modelling for HRA
in Dynamic situations." Reliability Engineering and System Safety, 78, 111-121.
[37]. Smidts, C., S.H. Shen, and A. Mosleh, The IDA Cognitive Model for the Analysis of
Nuclear Power Plant Operator Response Under Accident Condition. Part I:
Problem Solving and Decision Making Model. Reliability Engineering and System
Safety, 1997(55): p. 51-71.
[38]. Smith B. R. Tyler. S. W. (1997). “The Design and Application of MIDAS: A
Constructive Simulation for Human-System Analysis”. Presented at the 2nd
Simulation Technology & Training (SIMTECT) Conference, 17-20 March 1997,
Canberra, Australia.
[39]. State of Alaska “The Wreck of the Exxon Valdez Final Report” Alaska Oil Spill
Commission Published February 1990.
[40]. Straeter O. Evaluation of Human Reliability on the Basis of Operational Experience.
GRS-170, Cologne (Germany): GRS, 2000.
[41]. Sträter, O. Cognition and safety - An Integrated Approach to Systems Design and
Performance Assessment. Ashgate. Aldershot (2005).
[42]. Subsecretaria de Aviacion Civil, Spain. KLM B-747 PH-BUF and Pan Am B-747
N736 collision at Tenerife Airport, Spain, on 27 March 1977.
[43]. Swain, A.D., and Guttman, H.E. (1983). “Handbook on Human Reliability Analysis
with Emphasis on Nuclear Power Plant Application”. NUREG/CR-1278, SAND 08-
0200 R X, AN.
[44]. Takano K., Sasou K. and Yoshimura S. (1995) Simulation system for behaviour of
an operating group (SYBORG). XIV European Annual Conference on Human
Decision Making and Manual Control, Delft, The Netherlands, June 14-16.
[45]. Trucco P., Leva M.C. “A Probabilistic Cognitive Simulator for HRA studies:
PROCOS”. Politecnico di Milano, Department of Management, Economics and
Industrial Engineering, 14 January 2005.
[46]. Trucco P., Leva M.C., Corti G., Gallarati G. “A Probabilistic Cognitive Simulator for
HRA Studies”. CISAP 1 Conference Proceedings, Palermo, 2004.
[47]. Wickens C. “Engineering Psychology and Human Performance” Second Edition
New York; Harper-Collins 1992.
[49]. Woods D. D., Roth, E. M., and Pople, H. E. (1987). "Cognitive Environment
Simulation: an Artificial Intelligence System for Human Performance Assessment."
Technical Report NUREG-CR-4862, US Nuclear Regulatory Commission, Washington DC,
US.
ANNEX I : CONOPS USE CASE “HANDLE AIRCRAFT LANDING”
Scope
System, black-box. System means an Overall ATM/CNS Target Architecture compliant
system.
Level
User Goal
Summary
This Use Case describes how a Tower Runway Controller uses the System to control the
landing of an aircraft. It starts when the intermediary approach phase is completed and the
aircraft is ready for final approach, and ends when the Tower Runway Controller has ensured
that the aircraft has vacated the runway.
Actors
Tower Runway Controller (Primary) – wants to make sure that the aircraft lands and safely
vacates the runway.
Pilot (Support) – has to land the aircraft safely.
Executive Controller (Offstage) – has to take control of the aircraft back from the Tower
Runway Controller in case of a missed approach and wants to be informed of any runway
closures.
Multi-Sector Planner/Planning Controller (Offstage) – has to assist the Executive Controller
when handling the missed approach.
Tower Ground Controller (Offstage) – has to assume responsibility for the control of the
aircraft right after vacating the runway.
Tower Supervisor (Offstage) – wants to make sure that runways are used according to the
airport's traffic management policy.
ACC Supervisor (Offstage) - wants to be informed of any runway closures longer than a
specified period.
Flow Manager (Offstage) – wants to be informed of any runway closures longer than a
specified period which will affect traffic flows.
Preconditions
The flight is cleared for final approach by the Executive Controller in charge of establishing
the aircraft on final approach. The transfer of responsibility between the Executive Controller
and the Tower Runway Controller is completed. In particular the Communication contact
(voice) between the Tower Runway Controller and the Pilot is established. The System has
informed the pilot via data link of the runway traffic situation and the weather at the
airport (e.g. wind).
The System knows the planned runway exit for the aircraft.
Post conditions
Success end state
The System records that the aircraft has vacated and is no longer obstructing the
runway and communications have been transferred to the Tower Ground Controller
for the arrival taxi.
Failed end state
1. The System records that the landing has been aborted. The System knows the
current flight status e.g. either resuming a landing sequence or returned to the holding
area.
2. The System records that the aircraft has not vacated and is obstructing the runway.
Notes
Definitions
Definitions of the following acronyms and expressions used in this document “Arrival
taxi plan, initial approach, intermediary approach, final approach1, landing clearance”
are available in the OATA Glossary document.
Trigger
The Use Case starts when the System detects that the aircraft is on final approach.
Main Flow
1. The System notifies the Tower Runway Controller of the planned runway exit and
proposes to the Pilot the runway exit and associated taxi-in plan2.
2. The Pilot confirms the proposed runway exit and associated taxi-in plan.
3. The Tower Runway Controller, using the System, verifies that the runway is available
for the landing of the aircraft.
4. The Tower Runway Controller issues the landing clearance using R/T.
5. The Pilot lands the aircraft. The System detects the landing3 and records the landing
time4.
6. The Tower Runway Controller, assisted by the System, verifies that the aircraft has
vacated the runway.
7. The System detects that the aircraft has vacated the runway via the planned exit.
Communications are transferred to the Tower Ground Controller.
8. The Use Case ends when the System records that the aircraft has safely vacated the
runway.
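The main flow above can be sketched as a simple ordered sequence of steps; the dictionary below abbreviates the step descriptions, and the structure is an illustrative assumption (the alternative flows resume at a given step, which the start argument models):

```python
# Abbreviated main flow of the "Handle Aircraft Landing" Use Case.
MAIN_FLOW = {
    1: "System notifies planned runway exit and proposes taxi-in plan",
    2: "Pilot confirms runway exit and taxi-in plan",
    3: "Tower Runway Controller verifies runway availability",
    4: "Tower Runway Controller issues landing clearance (R/T)",
    5: "Pilot lands; System detects and records landing time",
    6: "Controller verifies the aircraft has vacated the runway",
    7: "System detects vacation; communications go to Tower Ground Controller",
    8: "Use Case ends: aircraft has safely vacated the runway",
}

def run_main_flow(start=1):
    # Yield the steps in order, optionally resuming at a later step
    # (as the alternative flows do, e.g. "the flow continues at step 2").
    for step in sorted(s for s in MAIN_FLOW if s >= start):
        yield step, MAIN_FLOW[step]

assert [s for s, _ in run_main_flow(7)] == [7, 8]
```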
Alternative Flows
[2] – The Pilot rejects the planned Runway Exit.
1. The definitions of the phases of approach are covered by ICAO Doc 4444.
2. See use case “taxi-in of an aircraft”.
3. Landing means that the aircraft remains on the surface.
4. The System informs the Tower Runway Controller, Aircraft Operator, Airport Operator and Flow Manager of the event.
9. The Pilot requests a runway exit other than planned from the Tower Runway Controller
by R/T.
10. The Tower Runway Controller agrees to the request and updates the runway exit using
the System.
11. The System confirms to the Pilot the runway exit and associated taxi-in plan5 using D/L.
12. The flow continues at step 2.
[7] – The Pilot vacates by a Runway Exit other than Planned
13. The System detects and notifies the Tower Ground and Tower Runway Controllers that
the aircraft does not use the planned runway exit and informs both controllers of the
actual runway exit. The System transfers communications to the Tower Ground
Controller using D/L.
14. The flow continues at step 8.
Failure Flows
[4] – The Runway is not Available (e.g. due to an Aborted Take-off).
15. The Tower Runway Controller, assisted by the System, is unable to issue a landing
clearance. The Tower Runway Controller instructs the Pilot to execute a missed
approach, notifies the System of the missed approach and instructs the Pilot by R/T to
contact the Executive Controller.
16. The Use Case ends when the System records that the aircraft has not landed.
[5] – The Pilot is unable to land.
17. The Pilot informs the Tower Runway Controller that he is unable to land and requests a
missed approach clearance.
18. The Tower Runway Controller instructs the Pilot to execute a missed approach, notifies
the System of the missed approach and instructs the Pilot by R/T to contact the
Executive Controller.
19. The Use Case ends when the System records that the aircraft has not landed.
[6] – The Pilot does not Manage to Vacate the Runway
20. The Tower Runway Controller, assisted by the System, detects that the aircraft is
obstructing the runway.
21. The Tower Runway Controller confirms with the Pilot that he has not vacated the
runway and notifies the System that the runway is obstructed for a defined period.
The Use Case ends when the System disseminates the runway obstruction information to the
upstream ACC(s) Supervisor(s) and Tower Supervisor, the concerned Executive
5. See use case “taxi-in of an aircraft”.
ANNEX II A: TASK ANALYSIS FOR USE CASE “HANDLING AIRCRAFT LANDING” IN FLOW CHART FORMAT
ANNEX II B: TASK ANALYSIS FOR USE CASE “HANDLING AIRCRAFT LANDING” IN TABLE FORMAT
For each step of the task analysis, the step ID, the description and the possible exits are listed below. The original table columns are: ID; Description; Correct Execution; Error Type Perception; Error Type Interpretation; Error Type Decision; Error Type Communication; Violation; Error in Recovery (with error modes EM1 to EM5 and recovery errors ER1, ER2).

1: Preconditions. The flight has been cleared for final approach by the Executive Controller. The transfer of responsibility between the Executive Controller and the Tower Runway Controller is completed. Communication between the Tower Runway Controller and the Pilot is established.
e1: Object on runway? yes: e2; no: e9.
e9: Visibility is good? yes: ta16; no: ta17.
ta16: ATCO verifies visually runway availability and issues landing clearance. Correct: e10; not done: ta17; not done: ta17; warning clearance plan: ta24.
ta17: ATCO verifies, using the radar, runway availability and issues landing clearance. Correct: e10; not done (warning/error): ta18; other than: ta24; warning clearance plan: ta24.
ta18: ATCO issues the landing clearance. Correct: e10; slip of the tongue: e10.
e10: The pilot rejects the planned exit and requests a different one? yes: ta19; no: tp5.
tp5: Readback of landing clearance. Correct: ta23; readback critical warning: ta23; clarification OK: e12; clarification non OK / incorrect readback: irrevocable failure (exit); incorrect readback warning: ta23.
ta23: Hearback. Correct: e12; hearback warning error: e12; hearback communication error (warning): e12; hearback irrevocable failure: exit.
ta19: ATCO understands request. Correct: ta20; misheard communication: tpa6.
tpa6: Clarification process. Correct: ta20; incorrect clarification (warning): ta20.
ta20: ATCO processes pilot request. Correct: e11; slip of the tongue (warning): e11; wrong clearance plan: irrevocable failure (exit).
e11: ATCO agrees to pilot request? yes: tp3; no: tp4.
tp3: Readback. Correct: ta21; readback critical warning: ta21; clarification OK: e12; clarification non OK / incorrect readback: irrevocable failure (exit); incorrect readback warning: ta21.
ta21: Hearback. Correct: e12; hearback warning error: e12; hearback communication error (warning): e12; hearback irrevocable failure: exit.
tp4: Readback. Correct: ta22; readback critical warning: ta22; clarification OK: e12; clarification non OK / incorrect readback: irrevocable failure (exit); incorrect readback warning: ta22.
ta22: Hearback. Correct: e12; hearback warning error: e12; hearback communication error (warning): e12; hearback irrevocable failure: exit.
e2: Plane A technically able to vacate? yes: tp1; no: tp2.
tp2: Pilot A, aware of the failure (unable to vacate the runway), communicates the problem to the ATCO. Correct: ta9; not done: tpa5.
tpa5: Audio check A. Correct: e8; clarification non OK / not done: e8*.
e8: Visibility is good? yes: ta14; no: ta15.
ta14: ATCO verifies visually runway unavailability and issues missed approach. Correct: tp6; not done: pilot recalls ATCO (no simulation); opposite: irrevocable failure (exit); not done (error/warning): ta15; other than: irrevocable failure (exit); other than: irrevocable failure (exit).
ta15: ATCO verifies, using the radar, runway unavailability and issues missed approach. Correct: tp6; not done: pilot recalls ATCO (no simulation); opposite: irrevocable failure (exit); not done: irrevocable failure (exit); other than: irrevocable failure (exit); other than: irrevocable failure (exit).
e8*: Visibility is good? yes: ta14*; no: ta15*.
ta14*: as ta14, with the "not done (error/warning)" exit leading to ta15*.
ta15*: as ta15.
ta9: ATCO understands communication. Correct: ta24; not heard communication: tpa4.
tpa4: Audio check A. Correct: e7; clarification non OK / not done: e7*.
e7: Visibility is good? yes: ta12; no: ta13.
ta12: ATCO verifies visually runway unavailability and issues missed approach. Correct: tp6; not done: pilot recalls ATCO (no simulation); opposite: irrevocable failure (exit); not done (error/warning): ta13; other than: irrevocable failure (exit); other than: irrevocable failure (exit).
ta13: ATCO verifies, using the radar, runway unavailability and issues missed approach. Correct: tp6; not done: pilot recalls ATCO (no simulation); opposite: irrevocable failure (exit); not done: irrevocable failure (exit); other than: irrevocable failure (exit); other than: irrevocable failure (exit).
e7*: Visibility is good? yes: ta12*; no: ta13*.
ta12*: as ta12, with the "not done (error/warning)" exit leading to ta13*.
ta13*: as ta13.
ta24: ATCO issues the missed approach clearance. Correct: tp6; later than: e12.
e12: Pilot lands aircraft anyway? yes: irrevocable failure; no: tp6.
tp6: Readback of missed approach clearance. Correct: ta25; readback critical warning: ta25; clarification OK: END TASK (aircraft B is not landed); clarification non OK / incorrect readback: irrevocable failure (exit); incorrect readback warning: ta25.
ta25: Hearback. Correct: END TASK (aircraft B is not landed); hearback warning error: END TASK (aircraft B is not landed); hearback communication error (warning): END TASK (aircraft B is not landed); hearback irrevocable failure: exit.
tp1: Pilot A, aware of his position, delivers the vacation confirmation. Correct: ta1; not done: tpa3; slip of the tongue: tpa2; other than: tpa2.
tpa2: Audio check B. Correct: e5; clarification not done: e5*; clarification other than: e5**.
tpa3: Audio check A. Correct: e5; clarification not done: e5*; clarification other than: e5**.
e5: Visibility is good? yes: ta7; no: ta8.
ta7: ATCO verifies visually runway unavailability and issues missed approach. Correct: tp6; not done: pilot recalls ATCO (no simulation); slip of the tongue: tp6; not done (error/warning): ta8; other than: irrevocable failure (exit); other than: irrevocable failure (exit).
ta8: ATCO verifies, using the radar, runway unavailability and issues missed approach. Correct: tp6; slip of the tongue: tp6; not done: irrevocable failure (exit); other than: irrevocable failure (exit).
e5*: Visibility is good? yes: ta7*; no: ta8*.
ta7*: as ta7, with the "not done (error/warning)" exit leading to ta8*.
ta8*: as ta8.
e5**: Visibility is good? yes: ta7**; no: ta8**.
ta7**: as ta7, with the "not done (error/warning)" exit leading to ta8**.
ta8**: as ta8.
ta1: ATCO understands communication. Correct: e3; not heard communication: tpa1.
tpa1: Audio check A. Correct: e4; clarification not done: e4*.
e4: Visibility is good? yes: ta5; no: ta6.
ta5: ATCO verifies visually runway availability and issues landing clearance. Correct: e10; opposite: ta24; slip of the tongue: e10; not done (warning): ta6.
ATCO verify
using the
radar, runway Slip of
availability and the
issues landing Opposite: tongue:
ta6 clearance e10 ta24 e10
visibility is
e4* good yes: ta5* no: ta6*
ATCO verifies
visually runway Slip of
availability and the Not done:
issues landing Opposite: tongue: Warning
ta5* clearance e10 ta24 e10 ta6*
ATCO verify
using the
radar, runway Slip of
availability and the
issues landing Opposite: tongue:
ta6* clearance e10 ta24 e10
visibility is
e3 good yes: ta2 no: ta3
ATCO verifies
visually runway
availability and Not done:
issues landing Warning
ta2 clearance e10 ta3
ATCO verify
using the
radar, runway
availability and Slip of the Not done:
issues landing tongue: Warning /
ta3 clearance e10 e10 Error: ta4
ATCO issues
Landing
clearance Slip of the
without tongue:
ta4 verification e10 e10
No:
Pilot
informs
ATCO and
requests
missed
Pilot B is able Yes: approach
e12 to land? tp11 (No
138
Error Type Error Type Error Type Violati
ID Description Correct Execution E T Perception Interpretation Decision Error Type Communication on Error in Recovery
Correct EM1 EM2 EM1 EM2 EM1 EM2 EM1 EM2 EM1 EM2 EM3 EM4 EM5 ER1 ER2
simulation)
Other
Than:
Irrevocabl
pilot B lands e failure
tp11 the aircraft e13 Exit
plane B
technically
e13 able to vacate yes: tp7 no: tp8
Pilot B aware
of
failure(unable
to vacate
runway)
comunicates
the problem to Not done:
tp7 ATCO ta26 tpa7
END
TASK:
Aircraft B
is Clarifica
obstructi tion Not
ng the done:
tpa7 audio check B runway e14
visibility is
e14 good yes: ta24 no: ta25
END Wrong
TASK: Other clearanc
ATCO detects Aircraft B than: e
visually that is Not done: Not done Irrevoc planning:
the aircraft B is obstructi Irrevocabl (warning / ablr Irrevocab
obstructing the ng the e Failure Error): ta failure le failure
ta24 runway runway EXIT 25 EXIT EXIT
END Wrong
ATCO detects TASK: Other clearanc
using the Aircraft B than: e
radar, that the is Not done: Not done : Irrevoc planning:
aircraft B is obstructi Irrevocabl Irrevocabl ablr Irrevocab
obstructing the ng the e failure e failure failure le failure
ta25 runway runway EXIT EXIT EXIT EXIT
END
TASK: ATCO
ATCO Aircraft B mishear
understands is d comm:
ta26 communication obstructi tpa8
139
Error Type Error Type Error Type Violati
ID Description Correct Execution E T Perception Interpretation Decision Error Type Communication on Error in Recovery
Correct EM1 EM2 EM1 EM2 EM1 EM2 EM1 EM2 EM1 EM2 EM3 EM4 EM5 ER1 ER2
ng the
runway
Delay
underst
anding
comm:
END END
TASK: TASK:
Aircraft B Aircraft
is B is
obstructi obstructi
clarification ng the ng the
tpa8 process runway runway
pilot B, aware
of his position,
vacates Other Other
tp8 runway tp10 Than: tp9 than:tpa9
END Clarific
TASK Clarifica ation
(Delay tion Not Other
confirmat done: than:
tpa9 audio check B ion) e15 e15*
visibility is
e15 good yes: ta27 no: ta28
END
TASK
(aircraft
ATCO verifies B has
visually that vacated Other
the aircraft B runway than:
has vacated by the Not done Irrevoc
runway by the exit other (Worning / able
exit other than than Error): failure
ta27 planned planned) ta28 EXIT
END
TASK
ATCO verifies (aircraft
using the B has
radar, that the vacated
aircraft B has runway
vacated by the Not done:
runway by the exit other Irrevocabl
exit other than than e failure
ta28 planned planned) EXIT
visibility is yes:
e15* good ta27* no: ta28*
140
Error Type Error Type Error Type Violati
ID Description Correct Execution E T Perception Interpretation Decision Error Type Communication on Error in Recovery
Correct EM1 EM2 EM1 EM2 EM1 EM2 EM1 EM2 EM1 EM2 EM3 EM4 EM5 ER1 ER2
END
TASK
(aircraft
ATCO verifies B has
visually that vacated Other
the aircraft B runway than:
has vacated by the Not done Irrevoc
runway by the exit other (Worning / able
ta27 exit other than than Error): failure
* planned planned) ta28* EXIT
END
TASK
ATCO verifies (aircraft
using the B has
radar, that the vacated
aircraft B has runway
vacated by the Not done:
runway by the exit other Irrevocabl
ta28 exit other than than e failure
* planned planned) EXIT
Error in
pilot recovery Error in localizati
of awareness Detection: on:
tp9 of his position ta33 tp10 tpa10
END Clarific
TASK Clarifica ation
(Delay tion Not Other
tpa1 confirmat done: than:
0 audio check B ion) e16 e16*
visibility is
e16 good yes: ta29 no: ta30
END
TASK
(aircraft
ATCO verifies B has
visually that vacated Other
the aircraft B runway than:
has vacated by the Not done Irrevoc
runway by the exit other (Worning / able
exit other than than Error): failure
ta29 planned planned) ta30 EXIT
END
TASK
ATCO verifies (aircraft
using the B has
radar, that the vacated
aircraft B has runway
vacated by the Not done:
runway by the exit other Irrevocabl
exit other than than e failure
ta30 planned planned) EXIT
141
Error Type Error Type Error Type Violati
ID Description Correct Execution E T Perception Interpretation Decision Error Type Communication on Error in Recovery
Correct EM1 EM2 EM1 EM2 EM1 EM2 EM1 EM2 EM1 EM2 EM3 EM4 EM5 ER1 ER2
visibility is yes:
e16* good ta29* no: ta30*
END
TASK
(aircraft
ATCO verifies B has
visually that vacated Other
the aircraft B runway than:
has vacated by the Not done Irrevoc
runway by the exit other (Worning / able
ta29 exit other than than Error): failure
* planned planned) ta30* EXIT
END
TASK
ATCO verifies (aircraft
using the B has
radar, that the vacated
aircraft B has runway
vacated by the Not done:
runway by the exit other Irrevocabl
ta30 exit other than than e failure
* planned planned) EXIT
pilot B Slip of
communicates the Other
vacation Not done: tongue: than:
tp10 confirmation ta33 tpa12 tpa11 tpa11
Clarific
Clarifica ation
tion Not Other
tpa1 done: than:
1 audio check B e19 e17 e17*
Clarific
Clarifica ation
tion Not Other
tpa1 done: than:
2 audio check A e19 e17 e17*
visibility is
e17 good yes: ta31 no: ta32
END
TASK
ATCO verify (aircraft END
visually that B has TASK
the Aircraft B safely (Delay
has safely vacated Confirm Not done:
ta31 vacate runway runway) ation) ta32
ATCO verify END
using the TASK END END
radar, that the (aircraft TASK TASK
Aircraft B has B has (Delay (Delay
safely vacate safely Confirm Confirmati
ta32 runway vacated ation) on)
142
Error Type Error Type Error Type Violati
ID Description Correct Execution E T Perception Interpretation Decision Error Type Communication on Error in Recovery
Correct EM1 EM2 EM1 EM2 EM1 EM2 EM1 EM2 EM1 EM2 EM3 EM4 EM5 ER1 ER2
runway)
visibility is yes:
e17* good ta31* no: ta32*
END
TASK
ATCO verify (aircraft END
visually that B has TASK
the Aircraft B safely (Delay
ta31 has safely vacated Confirm Not done:
* vacate runway runway) ation) ta32*
END
ATCO verify TASK
using the (aircraft END END
radar, that the B has TASK TASK
Aircraft B has safely (Delay (Delay
ta32 safely vacate vacated Confirm Confirmati
* runway runway) ation) on)
ATCO
Not
ATCO heard
understands comm:
ta33 communication e19 tpa13
Clarifica
tion Not
tpa1 done:
3 audio check A e18 e18*
visibility is
e18 good yes: ta34 no: ta35
END
TASK
ATCO verify (aircraft END
visually that B has TASK
the Aircraft B safely (Delay
has safely vacated Confirm Not done:
ta34 vacate runway runway) ation) ta35
END
ATCO verify TASK
using the (aircraft END END
radar, that the B has TASK TASK
Aircraft B has safely (Delay (Delay
safely vacate vacated Confirm Confirmati
ta35 runway runway) ation) on)
visibility is
e19 good yes: ta36 no: ta37
143
Error Type Error Type Error Type Violati
ID Description Correct Execution E T Perception Interpretation Decision Error Type Communication on Error in Recovery
Correct EM1 EM2 EM1 EM2 EM1 EM2 EM1 EM2 EM1 EM2 EM3 EM4 EM5 ER1 ER2
Other
than
(Warnin
g):
Not END
END done(War TASK
TASK ning): END (aircraft
ATCO verify (aircraft TASK B has
visually that B has (aircraft B safely
the Aircraft B safely has safely Not vacated
has safely vacated vacated done: Not done: runway
ta36 vacate runway runway) runway) ta37 ta37 )
Not
done(W
Not arning): Not
END done(War END done(War
ATCO verify TASK ning): END TASK ning): END
using the (aircraft TASK (aircraft TASK
radar, that the B has (aircraft B B has (aircraft B
Aircraft B has safely has safely safely has safely
safely vacate vacated vacated vacated vacated
ta35 runway runway) runway) runway) runway)
144
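The task rows above can be read as a transition map: each task ID leads either to its "Correct" successor or to an error-branch outcome (another task ID, a warning, or an exit state). A minimal sketch of this reading, using a handful of the rows above (the function and dictionary names are illustrative, not part of PROCOS):

```python
# Illustrative transition map built from a few rows of the table above.
# Keys are task IDs; each branch label maps to the successor state.
TRANSITIONS = {
    "tp1": {"Correct": "ta1", "Not done": "tpa3",
            "Slip of the tongue": "tpa2", "Other than": "tpa2"},
    "ta1": {"Correct": "e3", "Not heard comm": "tpa1"},
    "e3":  {"yes": "ta2", "no": "ta3"},
    "ta2": {"Correct": "e10", "Not done (Warning)": "ta3"},
}

def walk(start, choices):
    """Follow one path through the table, taking the given branch at each step."""
    path, node = [start], start
    for branch in choices:
        node = TRANSITIONS[node][branch]
        path.append(node)
    return path

# Pilot communicates correctly, ATCO understands, visibility is good:
print(walk("tp1", ["Correct", "Correct", "yes"]))  # ['tp1', 'ta1', 'e3', 'ta2']
```

A run of the simulator then amounts to sampling one branch per visited node until an END TASK or Exit state is reached.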
ANNEX III: Cognitive Flowcharts Used Within The Simulator PROCOS As
Validated For ATC Applications
The cognitive flowchart consists of three main sub-elements. The first is the hardware-stimuli flowchart, which describes the cognitive process of action for tasks triggered by HMI stimuli or, more generally, by external environmental stimuli. The second is the communication flowchart, tailored to human actions whose main triggering element and main outcome are communication processes. The first part of the communication flowchart is linked to parts of the hardware-stimuli flowchart, since some actions triggered by human communication then proceed like actions triggered by hardware stimuli. The last figure in Annex III presents the recovery process, which follows three main phases:
- Error in identification (the perception that something went wrong, either through hardware stimuli or through external communication);
- Error in localisation (the identification of where the error occurred, supported by a process of "pattern recognition" that can facilitate the identification of the problem);
- Error in correction (the actual planning and execution of the corrective action).
The recovery cognitive flowchart is triggered by the simulator every time a piece of equipment is in a state diverging from the expected one and this divergence is detectable. The correction can have a positive outcome only if the hardware failure has been labelled as recoverable by the analyst.
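The trigger rule described above can be sketched as follows (a minimal illustration with hypothetical names, not the actual PROCOS code): recovery starts only when an equipment state diverges from the expected one and the divergence is detectable, and the correction phase can succeed only if the analyst labelled the failure recoverable.

```python
from dataclasses import dataclass

@dataclass
class Equipment:
    name: str
    state: str
    expected_state: str
    detectable: bool   # can the operator perceive the divergence?
    recoverable: bool  # analyst's judgement: can correction succeed?

def triggers_recovery(eq):
    # Recovery is triggered only by a detectable divergence from the expected state.
    return eq.state != eq.expected_state and eq.detectable

def attempt_recovery(eq):
    """Walk the three recovery phases: identification, localisation, correction."""
    if not triggers_recovery(eq):
        return "no recovery triggered"
    # Phase 1: identification (something went wrong)
    # Phase 2: localisation (where the error occurred)
    # Phase 3: correction (plan and execute the corrective action)
    return "recovered" if eq.recoverable else "correction failed"

radar = Equipment("radar", state="failed", expected_state="on",
                  detectable=True, recoverable=False)
print(attempt_recovery(radar))  # correction failed
```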
Edition Number: Final Draft