Air Transport and Operations Symposium 2012
Modelling of Human Performance-Related Hazards in ATM

Tibor Bosse (1), Alexei Sharpanskykh (2), Jan Treur (3)
VU University Amsterdam, De Boelelaan 1081, 1081 HV Amsterdam, The Netherlands
{tbosse, sharp, treur}@few.vu.nl

Henk A.P. Blom (4) and Sybert H. Stroeve (5)
National Aerospace Laboratory NLR, Anthony Fokkerweg 2, 1059 CM Amsterdam, The Netherlands
{blom, stroeve}@nlr.nl
The existing approach to agent-based safety risk analysis in Air Traffic Management (ATM) covers hazards that may potentially occur within air traffic operations in two ways. One way is to cover hazards through agent-based model constructs. The second way is to cover hazards through bias and uncertainty analysis in combination with sensitivity analysis of the agent-based model. The disadvantage of the latter approach is that it is more limited in capturing potential emergent behaviour that could be caused by unmodelled hazards. The research presented in this paper explores to what extent agent-based model constructs that have been developed for other purposes are capable of modelling more hazards in ATM through the agent-based framework. The focus is on model constructs developed by VU University Amsterdam, mainly addressing human factors and interaction between multiple (human and computer) agents within teams. Inspired by a large database of ATM hazards analysed earlier, a number of VU model constructs are identified that have the potential to model remaining hazards. These model constructs are described at a conceptual level and analysed with respect to the extent to which they increase the percentage of modelled hazards in ATM.
I. Introduction
ATM is a joint cognitive system in which a large variety of human and technical agents interact with each other [22]. Thanks to these interactions, they jointly cope in an intelligent manner with the various disturbances that may be caused by the environment. For example, an unexpected weather front or thunderstorm may trigger a sequence of activities that ripples through several agents in the joint cognitive system, all with the aim to establish an effective and safe re-organisation of the air traffic.
The scientific discipline which studies the design of such intelligent joint cognitive systems is known under the name Resilience Engineering [21,23]. Resilience indicates that operations and organisations are able to resist a wide variety of demands within their domains and thus should be able to recover from any condition in their domains that may disturb the stability of the operation or organisation. This implies that resilience engineering aims to address a wide range of nominal and non-nominal conditions. Resilience engineering has some common ground with safety analysis, where non-nominal conditions are often referred to as hazards. In spite of this commonality between resilience engineering and traditional safety analysis, there are also two significant differences:
(1) Assistant Professor, Department of Computer Science, VU University Amsterdam.
(2) Postdoc Researcher, Department of Computer Science, VU University Amsterdam.
(3) Full Professor, Department of Computer Science, VU University Amsterdam.
(4) Full Professor, Chair ATM Safety, Dept. of Aerospace Engineering, Delft University of Technology; Principal Scientist, Air Transport Safety Institute, National Aerospace Laboratory NLR.
(5) Senior Scientist, Air Transport Safety Institute, National Aerospace Laboratory NLR.
1. Resilience engineering places much more emphasis on the potential ways in which human agents in the joint cognitive system can respond flexibly to the various nominal and non-nominal conditions, rather than on quantifying the safety risks of various non-nominal conditions.
2. The focus of traditional safety analysis (e.g., Ref. 13) is on hazards that can be analysed using linear causation mechanisms (e.g., fault/event trees); the consequence of this is that a significant part of the nominal and non-nominal conditions tends to fall out of sight.
The flexibility of human responses is especially important for responding well when the air traffic situation evolves into a condition for which the procedures are no longer unambiguous. From a resilience engineering perspective, this means that we should find out what these kinds of non-nominal conditions are and how humans anticipate their potential evolution from a nominal condition into a non-nominal condition.
For a joint cognitive system as complex as ATM, resilience engineering is at an early stage of development. During recent years, novel psychological model constructs have been studied for capturing human cognition and its interaction with other joint cognitive system entities [21,23]. The results obtained so far also demonstrate that there are non-psychological challenges when it comes to a systematic analysis of the combinatorially many potential behaviours that may stem from external and internal events and the subsequent interactions between the various entities in the joint cognitive system.
To support a more systematic analysis, the MAREA (Mathematical Approach towards Resilience Engineering in ATM) project aims to develop a mathematical modelling and analysis approach for resilience engineering in ATM. As a first step, in a previous phase of the project, a large database of hazards in ATM has been compiled, including ways that pilots and controllers deal with them. In addition, a systematic analysis has been performed to assess which hazards from this set are and which are not modelled by existing multi-agent model constructs of the TOPAZ (Traffic Organization and Perturbation AnalyZer) safety assessment methodology [1,2]. This analysis [37] indicates that 58% of the hazards in the ATM Hazard Database are modelled well, 11% are partly modelled, and 30% of the hazards are not modelled. Although the TOPAZ approach evaluates the unmodelled hazards through bias and uncertainty analysis [14], this limits the capability of identifying emergent behaviour that would be due to unmodelled hazards.
In the next phase of the project, which is the main topic of the current paper, the analysis mentioned above is used as input for a follow-up analysis. The main goal of this second analysis is to explore whether the percentage of hazards from the database that are modelled can be (substantially) increased by taking into account other agent-based model constructs that have been developed outside the air transport safety analysis domain. The focus in this analysis is on model constructs that have been (co-)developed by the Agent Systems research group at VU University Amsterdam and are based on a large body of literature in the areas of human factors and agent-based modelling. In total, 11 novel model constructs are identified in this paper, which address human factors topics such as trust, visual attention, and handling confusing information in maintaining situation awareness and decision making. For each model construct, a high-level description is provided of the concepts that play a role in the model as well as the functioning of the model. Moreover, by an informal comparison, the 11 model constructs are matched against all hazards in the database. During this comparison, for each hazard-model construct combination, the nature of the hazard in question is studied and, by performing 'mental simulation' (i.e., imagining that the model construct in question is actually executed), the analyst assesses whether this 'simulation' is able to reproduce the hazard.
This paper is structured as follows. Section 2 provides a brief, high-level description of all of the existing VU model constructs that are identified as being relevant for agent-based modelling of remaining hazards. Section 3 presents an analysis of the extent to which the TOPAZ and VU model constructs are capable of modelling the hazards from the database. Section 4 concludes the paper with a discussion and an overview of follow-up research.
II. Overview of VU-Model Constructs
The focus of the analysis described in this paper is on model constructs that have been developed according to the multi-agent modelling and analysis approach used by the Agent Systems research group at VU University Amsterdam [4,5]. This choice is motivated by the fact that the VU approach is well in line with the multi-agent view for safety analysis that is advocated in the MAREA project [37],
which is reflected in the earlier phase of the project (in which a list of model constructs was identified that follow the multi-agent DRM modelling paradigm of the TOPAZ methodology [1,2]). Similar to multi-agent DRM modelling, the VU approach also takes an agent-based perspective, where an agent may represent any autonomous entity (i.e., both a human and a technical system) that is able to make decisions and to interact with its environment by communication, observation and actions. The approach attempts to describe the dynamics of agent systems by sequences of states of the world over time, and expresses the basic mechanisms that lead to those sequences as causal/temporal relations in some formal language (see below for more details). Furthermore, the VU Agent Systems research group has extensive experience in developing dynamical computational models for human-related processes that play a role within socio-technical systems (including ATM), such as decision making, situation awareness, and visual attention. These computational models are based on a large body of literature in the areas of human factors and agent-based modelling. For this reason, the focus of the current work package is on model constructs developed by this research group.
In this section, existing model constructs (co-)developed by VU that were identified as being applicable in the current project are described at a conceptual level. Note that, although the model constructs are only presented in an informal notation here, each of them has a precise formal definition, which can be found in the corresponding paper(s). In principle, all of the presented models have been developed on the basis of a standard methodology developed by the Agent Systems group at VU Amsterdam. This methodology is inspired by the assumption that real-world processes can be described by sequences of states of the world over time, and that the basic mechanisms that lead to those sequences can be expressed as causal/temporal relations in some formal language. To this end, to formalise the model constructs, the generic modelling language (and software environment) TTL (Temporal Trace Language) [4] and its executable sublanguage LEADSTO (Language and Environment for Analysis of Dynamics by SimulaTiOn) [5] are often used. These languages are hybrid in the sense that they combine logical/qualitative aspects (e.g., as in rule-based systems) with mathematical/quantitative aspects (e.g., as in dynamical systems/differential equation modelling). They also allow the modeller to represent probabilistic aspects. In the following paragraphs, these formal details have been left out, since the main aim is to provide a high-level overview of the concepts and relationships that play a role in the selected model constructs. For more technical details of the languages TTL and LEADSTO, see Refs. 4 and 5, respectively.
In the following sub-sections, model constructs are presented for 11 processes related to human
behaviour in ATM that were considered useful within MAREA.
A. Bottom-up Attention
This model construct [8] describes the development of a human's state of attention over time. The model construct has been developed in the context of operators that have to perform demanding tasks in dynamic environments. The human's 'state of attention', which roughly refers to all objects in the environment that are in the person's working memory at a certain time point, is defined as a distribution of values over different locations in the environment. The model continuously calculates this state of attention based on a number of factors, including the person's gaze direction, the locations of the objects involved, and the characteristics of these objects (such as their brightness and size). More precisely, the extent to which a location attracts attention is calculated as the weighted sum of the attention-drawing characteristics of all objects at the location, divided by the Euclidean distance between the location and the point of gaze. Next, for each location, this level of attraction is combined with the old level of attention in order to determine the new level of attention for that location. In addition, a decay mechanism ensures that attention gradually decreases over time for locations that are not observed.
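As a rough illustration of this update scheme, the following sketch computes attraction and attention per location. The feature weights, blending factor and decay rate below are illustrative choices, not the published parameters:

```python
import math

def attraction(objects, loc, gaze, weights, eps=1.0):
    """Weighted sum of the attention-drawing features of the objects at `loc`,
    divided by the Euclidean distance between `loc` and the point of gaze."""
    total = sum(sum(w * obj[f] for f, w in weights.items())
                for obj in objects if obj["loc"] == loc)
    return total / (math.dist(loc, gaze) + eps)  # eps avoids division by zero

def update_attention(old, objects, gaze, weights, alpha=0.6, decay=0.9):
    """Blend the new attraction level with the old attention level for each
    location; locations that attract nothing decay towards zero."""
    locations = set(old) | {obj["loc"] for obj in objects}
    return {loc: alpha * attraction(objects, loc, gaze, weights)
                 + (1 - alpha) * decay * old.get(loc, 0.0)
            for loc in locations}

objects = [{"loc": (0, 0), "brightness": 0.9, "size": 0.5},
           {"loc": (3, 4), "brightness": 0.2, "size": 0.3}]
weights = {"brightness": 1.0, "size": 0.5}
state = update_attention({}, objects, gaze=(0, 0), weights=weights)
```

With the gaze at (0, 0), the bright nearby object attracts more attention than the dim distant one, and repeated updates with no objects let the attention level fade.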
B. Experience-Based Decision Making
For the process of experience-based decision making, two model constructs have been identified,
which are described in Ref. 34 and 36 respectively.
The first model construct [34] is based on the Expectancy Theory by Vroom, as described in Ref. 31. According to this theory, when a human evaluates alternative possibilities to act, he or she explicitly or implicitly makes estimations for the following factors: valence V{i}, expectancy E{i,j} and instrumentality I{i,j,k}. The estimation consists of first determining the outcomes of an action alternative, and then evaluating how desirable these outcomes are for the agent (i.e., evaluation
w.r.t. the agent’s goals). For example, reporting of a safety incident may lead to the team’s approval,
which is desired by the agent.
Expectancy refers to the individual's belief about the likelihood that a particular act will be followed by a particular outcome (called a first-level outcome). Its value varies between 0 and 1. Instrumentality is a belief concerning the likelihood of a first-level outcome resulting in a particular second-level outcome; its value varies between -1 and +1. Instrumentality takes negative values when a second-level outcome has a negative correlation with a first-level outcome. A second-level outcome represents a desired (or avoided) state of affairs that is reflected in the agent's needs. In the proposed approach the original expectancy model is refined by considering specific types of individual needs described above. Valence refers to the strength of the individual's desire for an outcome or state of affairs; it is also an indication of the priority of needs. Values of expectancies, instrumentalities and valences change over time, in particular due to individual and organisational learning.
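How such estimations combine into a preference for one action alternative can be sketched with the classic Vroom aggregation; the exact combination rule used in the refined model construct may differ, and the example values are hypothetical:

```python
def motivational_force(expectancy, instrumentality, valence):
    """Vroom-style force of one action alternative: for each first-level
    outcome j, its expectancy E[j] in [0, 1] is weighted by the valences of
    the second-level outcomes k it is instrumental to, with instrumentality
    I[j][k] in [-1, 1]."""
    return sum(expectancy[j] *
               sum(instrumentality[j][k] * valence[k] for k in valence)
               for j in expectancy)

# Hypothetical example: the alternative "report a safety incident".
valence = {"team_approval": 0.8, "extra_workload": -0.3}
E = {"approval_received": 0.7}
I = {"approval_received": {"team_approval": 0.9, "extra_workload": 0.4}}
force_report = motivational_force(E, I, valence)
```

The alternative with the highest force would be the one the agent is most motivated to choose.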
The second model construct [36] is based on the simulation hypothesis proposed by Hesslow [17]. Hesslow argues that emotions may reinforce or punish simulated actions, which may transfer to overt actions, or serve as discriminative stimuli. To realise this idea, Damasio's Somatic Marker Hypothesis [11] was adopted. This hypothesis provides a central role in decision making to the emotions felt. Within a given context, each represented decision option induces (via an emotional response) a feeling which is used to mark the option. For example, a strongly negative somatic marker linked to a particular option occurs as a strongly negative feeling for that option. Similarly, a positive somatic marker occurs as a positive feeling for that option. To realise the somatic marker hypothesis in behavioural chains, emotional influences on the preparation state for an action are defined. Through these connections, emotions influence the agent's readiness to choose the option. From a neurological perspective, the impact of a sensory state on an action preparation state via the connection between them in a behavioural chain will depend on how the consequences of the action are felt emotionally.
C. Operator Functional State
The operator functional state (OFS) model construct [3] has also been developed in the context of operators that have to perform demanding tasks in dynamic environments. The model determines a person's functional state as a function of task properties and personal characteristics. The model is based on two different theories: (1) the cognitive energetic framework [19], which states that effort regulation is based on human resources and determines human performance in dynamic conditions, and (2) the idea that, when performing sports, a person's generated power can continue at a critical power level without the person becoming more exhausted [18]. The functional state of a human represents a dynamical state of the person. In the model, this is defined by a combination of factors, including exhaustion, motivation and experienced pressure, but also the amount of generated and provided effort. All of these factors are represented as a real number between 0 and 1, and their mutual influences are represented by a set of differential equations. For a more detailed description, see Ref. 1.
D. Information Presentation
This model construct [30] consists of two interacting dynamical models: one to determine the human's functional state and one to determine the effects of the chosen type and form of information presentation. The dynamical model for the functional state was adopted from Ref. 1 and is described in the previous section. The interaction from the information presentation model to the functional state model takes place by affecting the task demands: processing demand in the information presentation model is input to task demands in the functional state model.

The general paradigm of the relations within the presentation model is partially based on existing models of workload that consider the fit between individual factors, such as coping capacity, effort and motivation, on one side, and work demands on the other side. One example of such a model can be found in Ref. 29. This paradigm has been applied to the fit between the effort that a human is willing to invest while performing a task and the demand. Effort is determined by internal and external factors, while demand is imposed externally.

Presentation format aspects can be seen as a part of the task demands that are imposed on a person, because the form of a presentation may change processing demands. On the other hand, some presentation aspects, for example background colour and luminance, can be seen as available resources that help a person to perform a task. Luminance is regarded both as a part of demands and as a part of resources in this model. All types of aspects converge into two more global internal
factors that influence the task performance: the physiological state of alertness and the mental information processing state of an operator. Among these concepts, a distinction is made between the states of available and used resources of alertness and information processing (alertness utilisation and available effort, respectively) and the states of demand for alertness and information processing (alertness demand and processing demand). The fit between the usage of these capacities and the demands determines the functioning of a human while performing a task: the functioning fit. Two specific types of fit are considered: alertness fit and processing fit.
If the usage of capacities and the demands are at the same level, the fits will be high. If the levels of capacities and demands differ much, then the fits will be low. If both alertness fit and processing fit are high, then the functioning fit will be high. The absolute difference between capacities and demands is evaluated, and there is no distinction between the situations of high capacity and low demand vs. low capacity and high demand, because in the context of the given model both situations are not efficient with respect to the usage of resources and correspond to a poor fit.
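The fit computation described above can be sketched as follows; taking the minimum of the two fits as the overall functioning fit is an assumption, not necessarily the published combination rule:

```python
def fit(usage, demand):
    """A fit is high when usage and demand are at the same level; the
    absolute difference penalises over- and under-load symmetrically."""
    return 1.0 - abs(usage - demand)

def functioning_fit(alertness_use, alertness_demand,
                    processing_use, processing_demand):
    # Both fits must be high for the overall functioning fit to be high;
    # taking the minimum is one simple way to express that (an assumption).
    return min(fit(alertness_use, alertness_demand),
               fit(processing_use, processing_demand))

balanced = functioning_fit(0.7, 0.7, 0.6, 0.6)    # usage matches demand
overloaded = functioning_fit(0.7, 0.7, 0.3, 0.9)  # processing overload
```

Note that `fit` is symmetric, so high capacity with low demand scores exactly as poorly as low capacity with high demand, as the text requires.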
E. Safety Culture
The model construct for safety culture is extensive and can be found in detail in Ref. 35. In this paper, an application of the model to an occurrence reporting cycle is presented in the context of an existing air navigation service provider. The main purpose of the modelling was to establish how safety culture, described by a set of safety culture indicators related to safety occurrence reporting, emerges from the organisational structures and processes. The organisational model was specified using the framework described in Section 2H below. It includes national culture aspects based on the cultural framework by Hofstede [20], in particular: individualism (IDV), the degree to which individuals are integrated into groups; power distance index (PDI), the extent to which the less powerful members of an organisation accept that power is distributed unequally; and uncertainty avoidance index (UAI), dealing with an individual's tolerance for uncertainty and ambiguity.
One of the important internal states considered in the model is the agent's commitment to safety. Commitment to safety is determined largely by the agent's maturity degree and the agent's tasks. In the theory of situational leadership [16], the agent's maturity w.r.t. a task is defined as an aggregate of the agent's experience, willingness and ability to take responsibility for the task. The agent's willingness to perform a task is determined by the agent's confidence and commitment, which are necessary for the air traffic control (ATC) task execution. The ability of an agent to perform a task is determined by its knowledge and skills. The maturity value changes over time as a result of gaining new knowledge and skills, and of the changing self-confidence of a controller.

In the model, the adequacy of the mental models for the ATC tasks depends on the sufficiency and timeliness of the training provided to the controller and the adequacy of knowledge about safety-related issues. Such knowledge is contained in reports that resulted from safety-related activities: final occurrence assessment reports resulting from occurrence investigations, and monthly safety overview reports.
The agent's commitment to safety is also influenced by the perceived commitment to safety of other team members and of the management. An agent evaluates the management's commitment to safety by considering factors that reflect the management's effort in contributing to safety (investment in personnel and technical systems, training, safety arrangements). The perception by an agent of the average commitment to safety of its team is based on the perception of the commitment to safety of the team supervisor and of the other team members.

In this way, the commitment value is calculated based on a feedback loop: the agent's commitment influences the team commitment, but the commitment of the team members and of the management also influences the agent's commitment.
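The feedback loop described above can be sketched as an iterated blending of commitment values; the weights are illustrative, and in the full model each perceived commitment is itself modelled rather than read off directly:

```python
def update_commitments(commitments, management, w_self=0.6, w_team=0.3,
                       w_mgmt=0.1):
    """One round of the feedback loop: each agent's commitment to safety is
    blended with the perceived average commitment of the other team members
    and the perceived management commitment."""
    n = len(commitments)
    total = sum(commitments)
    return [w_self * c
            + w_team * (total - c) / (n - 1)  # average over the others
            + w_mgmt * management
            for c in commitments]

team = [0.9, 0.5, 0.3]  # hypothetical initial commitments
for _ in range(50):
    team = update_commitments(team, management=0.8)
```

Iterating the loop drives the team towards a shared commitment level shaped by the management's commitment, illustrating how a collective property can emerge from individual interactions.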
F. Situation Awareness
The model construct for situation awareness (SA) [26] consists of four main components. Three components are in line with the model of Endsley [12], which includes the perception of cues, the comprehension and integration of information, and the projection of information for future events. In addition, the model extends the model presented in Ref. 12 by incorporating some sophisticated AI-based inference algorithms based on mental models, as well as the notion of aggregated complex beliefs.
The model functions as follows. Initially, the agent starts to observe the world and obtains the results of these observations. These results are forwarded to the component responsible for the
formation of beliefs about the current situation. In this component, two types of beliefs are distinguished, namely simple beliefs and complex beliefs. The simple beliefs concern simple statements about the current situation that have a one-to-one mapping to observations, or a one-to-one mapping to another simple belief (e.g., I believe that an aircraft is ready for takeoff based upon my observed communication about this). The complex beliefs are aggregations of multiple beliefs and describe the situation in a composed manner. Both types of beliefs are represented in a hybrid language, including both qualitative information (namely the information that the belief is about) and quantitative information (namely a real number representing the time point to which the belief refers, and a real number representing the strength of the activity of the belief in the mind of the agent). Using the knowledge stored in the mental model (represented in terms of causal/temporal relationships; see Ref. 26 for technical details), the component first of all derives simple beliefs about the situation. Thereafter, the complex beliefs are derived from the simple beliefs, again using the knowledge stored in the mental models. In case a complex belief is composed of multiple beliefs that are in conflict, this complex belief will have a lower activation value (i.e., the agent is less certain about it). In order to project the complex beliefs to the future situation, they are forwarded to the component for belief formation on the future situation. Here, again, a mental model is used to make the predictions. The judgment of the future situation that then follows is used to direct the observations of the agent.
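The conflict-lowers-activation idea can be sketched as follows; the averaging and the penalty factor are illustrative assumptions, not the published aggregation rule:

```python
def aggregate(simple_beliefs):
    """Combine simple beliefs into one complex belief.  Each simple belief
    is a (statement, sign, activation) triple with sign +1 or -1; beliefs
    asserting the same statement with opposite signs are in conflict, which
    lowers the activation of the resulting complex belief."""
    activation = sum(a for _, _, a in simple_beliefs) / len(simple_beliefs)
    signs = {}
    if any(signs.setdefault(s, sg) != sg for s, sg, _ in simple_beliefs):
        activation *= 0.5  # the agent is less certain about the aggregate
    return activation

consistent = aggregate([("runway_clear", +1, 0.9),
                        ("aircraft_on_final", +1, 0.8)])
conflicting = aggregate([("runway_clear", +1, 0.9),
                         ("runway_clear", -1, 0.8)])
```

The complex belief built from conflicting evidence ends up with roughly half the activation of the one built from consistent evidence.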
G. Trust
For the concept of trust, five closely related model constructs have been identified. Trust is based on a number of factors, an important one being the agent's own experiences with the subject of trust, e.g., another agent. All five models in this section are based on this notion of experiences. The basic model is presented in Ref. 28; all other models are variants and/or extensions of this model. According to this basic model, each event that can influence the degree of trust is interpreted by the agent to be either a trust-negative experience or a trust-positive experience. If the event is interpreted to be a trust-negative experience, the agent will lose trust to some degree; if it is interpreted to be trust-positive, the agent will gain trust to some degree.
The first model is a simple qualitative model that assumes four discrete trust states (unconditional trust, conditional trust, conditional distrust, unconditional distrust) and transitions between these states based on experiences. In Ref. 28 it is also explained how this model can be converted into a quantitative model, where the four trust states are replaced by a simple numerical trust value (in the domain [0,1]), and the transitions are replaced by mathematical formulae. The four other models extend this basic model in various ways. In particular, the second model [25] addresses relative trust (i.e., the idea that trust in one entity has an impact on the trust in other entities) and the notion of exploration of trustees during decision making (i.e., exploring their trustworthiness by having interactions with them). The third model [32] addresses affective factors (e.g., the interaction between feelings triggered by certain entities and the trust in those entities). The fourth model [27] addresses population-based trust (i.e., the notion of a collective trust state of a whole group of agents in one particular entity), and the fifth model [24] addresses biased trust (where biases refer, for instance, to the phenomenon of having more trust in agents within one's own social group).
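The quantitative version of the basic model can be sketched as a simple smoothing update over a sequence of interpreted experiences; the learning rate and the symmetric treatment of positive and negative experiences are illustrative simplifications:

```python
def update_trust(trust, positive_experience, rate=0.2):
    """Smoothing update on a trust value in [0, 1]: a trust-positive
    experience pulls the value towards 1, a trust-negative one towards 0."""
    target = 1.0 if positive_experience else 0.0
    return trust + rate * (target - trust)

trust = 0.5  # start undecided
for experience in [True, True, False, True]:
    trust = update_trust(trust, experience)
```

Because each update moves the value only a fraction of the way towards its target, the trust value stays within [0, 1] and reflects the recent history of experiences more strongly than older ones.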
H. Formal Organisation
This model construct [33] can be used to model formal organisations from three interrelated perspectives (views): the process-oriented view, the performance-oriented view, and the organisation-oriented view. A formal organisation is imposed on organisational agents, described in the agent-oriented view. The internal structures and behaviour of agents are not defined by the formal organisation. Agents may or may not follow the prescriptions of the formal organisation.

The process-oriented view contains information about the organisational functions, how they are related, ordered and synchronised, and the resources they use and produce. The main concepts are task, process, and resource type, which, together with the relations between them, are specified in a formal language. A task represents a function performed in the organisation and is characterised by name, maximal and minimal duration. Tasks can range from very general to very specific. General tasks can be decomposed into more specific ones using AND- and OR-relations, thus forming hierarchies. A workflow is defined by a set of (partially) temporally ordered processes. Each process is defined using a task as a template and inherits all characteristics of the task. Decisions are also treated as processes, associated with decision variables taking as values the possible decision
outcomes. The (partial) order of execution of processes in the workflow is defined by sequencing,
branching, cycle and synchronisation relations specified by the designer.
Tasks use/consume/produce resources of different types. Resource types describe tools, supplies, components, data or other material or digital artifacts, and are characterised by name, category (discrete, continuous), measurement_unit, and expiration_duration (the length of the time interval during which a resource type can be used). Resources are instances of resource types and inherit their characteristics, having, in addition, a name and an amount. Some resources can be shared, or used simultaneously, by a set of processes (e.g., storage facilities, transportation vehicles). Alternative sets of processes sharing a resource can be defined.
Central notions in the performance-oriented view are goal and performance indicator (PI). A PI is a quantitative or qualitative indicator that reflects the state/progress of the company, unit or individual. The characteristics of a PI include, among others: type (continuous, discrete); unit of measurement; time_frame (the length of the time interval for which it will be evaluated); scale of measurement; source (the internal or external source used to extract the PI: company policies, mission statements, business plan, job descriptions, laws, domain knowledge, etc.); owner (the role or agent whose performance it measures/describes); and hardness (soft or hard, where soft means not directly measurable, qualitative, e.g., customer satisfaction, company reputation, employee motivation, and hard means measurable, quantitative, e.g., number of customers, time to produce a plan).
PIs can be related through various relationships. The following are considered in the framework:
(strongly) positive/negative causal influence of one PI on another, positive/negative correlation
between two PIs, aggregation – two PIs express the same measure at different aggregation levels.
Such relationships can be identified using e.g. company documents, domain knowledge, inference
from known relations, statistical or data mining techniques, knowledge from other structures of the
framework. Using these relations, a graph structure of PIs can be built.
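As an illustration, such a graph of PIs and their relationships could be sketched as follows. The PI names, relation labels and the query method are hypothetical examples, not part of the framework's formal language:

```python
# Minimal sketch of a graph of performance indicators (PIs) and the
# relationships between them described above. All names are hypothetical.
class PIGraph:
    def __init__(self):
        self.relations = []  # list of (source PI, target PI, relation kind)

    def add_relation(self, src, dst, kind):
        # kind: 'pos_causal', 'neg_causal', 'pos_corr', 'neg_corr', 'aggregation'
        self.relations.append((src, dst, kind))

    def influenced_by(self, pi):
        # All PIs with a direct causal influence on the given PI.
        return [s for (s, d, k) in self.relations
                if d == pi and k in ('pos_causal', 'neg_causal')]

g = PIGraph()
g.add_relation('employee_motivation', 'time_to_produce_plan', 'neg_causal')
g.add_relation('number_of_customers', 'company_reputation', 'pos_corr')
g.add_relation('unit_throughput', 'company_throughput', 'aggregation')
print(g.influenced_by('time_to_produce_plan'))  # ['employee_motivation']
```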
Based on PIs, PI expressions can be defined as mathematical statements over PIs that can be evaluated to a numerical, qualitative or Boolean value. They are used to define goal patterns. The type of a goal pattern indicates the way its property is checked: achieved (ceased) – true (false) at a specific time point; maintained (avoided) – true (false) throughout a given time interval; optimised – whether the value of the PI expression has increased, decreased or approached a target value over a given interval.
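As an illustration, the goal pattern types could be checked over a time series of PI values roughly as follows. The function names, the example PI and its values are hypothetical, not taken from the framework:

```python
# Illustrative evaluation of goal patterns over a time series of PI values.
def achieved(series, t, predicate):
    # 'achieved' pattern: the property holds at a specific time point t.
    return predicate(series[t])

def maintained(series, interval, predicate):
    # 'maintained' pattern: the property holds throughout the time interval.
    lo, hi = interval
    return all(predicate(v) for v in series[lo:hi + 1])

def optimised(series, interval, target):
    # 'optimised' pattern: the PI expression approached the target value
    # over the interval (its distance to the target did not increase).
    lo, hi = interval
    return abs(series[hi] - target) <= abs(series[lo] - target)

workload = [0.9, 0.8, 0.7, 0.6]   # hypothetical PI values per time step
print(achieved(workload, 3, lambda v: v < 0.65))         # True
print(maintained(workload, (0, 3), lambda v: v <= 0.9))  # True
print(optimised(workload, (0, 3), target=0.5))           # True
```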
Goals are objectives that describe a desired state or development and are defined by adding to goal patterns information such as desirability and priority. The characteristics of a goal include, among others: priority; evaluation type – achievement goal (based on the achieved/ceased pattern, evaluated at a time point) or development goal (based on the maintained/avoided/optimised pattern, evaluated over a time interval); horizon – the time point/interval for which the goal should be satisfied; hardness – hard (satisfaction can be established) or soft (satisfaction cannot be clearly established; instead, degrees of satisficing are defined); negotiability.
A goal can be refined into sub-goals forming a hierarchy. Information about the satisfaction of
lower-level goals can be propagated to determine the satisfaction of high-level goals. A goal can be
refined into one or more alternative goal lists of AND-type or balanced-type (a more fine-tuned way of decomposition, inspired by the weighted average function). For each type, propagation rules are
defined.
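A minimal sketch of such propagation rules, assuming Boolean satisfaction for AND-type refinement and degrees of satisfaction with a hypothetical acceptance threshold for balanced-type refinement:

```python
# Sketch of goal-satisfaction propagation from sub-goals to a parent goal.
# AND-type: the parent is satisfied only if all sub-goals are satisfied.
# Balanced-type: a weighted average of sub-goal satisfaction degrees,
# compared against a (hypothetical) acceptance threshold.
def and_type(satisfactions):
    return all(satisfactions)

def balanced_type(satisfactions, weights, threshold=0.7):
    score = sum(s * w for s, w in zip(satisfactions, weights)) / sum(weights)
    return score >= threshold

print(and_type([True, True, False]))                    # False
print(balanced_type([1.0, 0.6, 0.8], [2.0, 1.0, 1.0]))  # True (score 0.85)
```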
In the organisation-oriented view organisations are modelled as composite roles that can be
refined iteratively into a number of (interacting) composite or simple roles, representing as many
aggregation levels as needed. The refined role structures correspond to different types of
organisation constructs (e.g., groups, units, departments). Yet many of the existing modelling
frameworks are able to represent only two or three levels of abstraction: the level of a role, the level
of a group composed of roles, and the overall organisation level. The organisation-oriented view
provides means to structure and organise roles by defining interaction and power relations on them.
One of the aims of an organisational structure is to facilitate the interaction between the roles that are involved in the execution of the same or related task(s). Therefore, patterns of role interactions are usually reflected in an organisation structure. Each role has an input and an output interface, which facilitate the interaction (in particular, communication) with other roles and the environment.
Besides interaction relations, also power relations on roles constitute a part of the formal
organisational structure. Formal organisational power (authority) establishes and regulates normative
superior-subordinate relationships between roles. Authority relations are defined w.r.t. tasks. Roles
have rights and responsibilities related to different aspects of tasks (e.g., execution, monitoring,
consulting, and making technological and/or managerial decisions). Roles with managerial rights may
under certain conditions authorise and/or make other roles responsible for certain aspects of task
execution. In many modern organisations rewards and sanctions form part of the authority relation; thus, they are explicitly defined by appropriate language constructs.
I. Learning
This model construct (Ref. 38) addresses learning in the context of decision making. In decision making tasks different options are compared in order to make a reasonable choice among them. Options usually have emotional responses associated with them, relating to a prediction of a rewarding or aversive consequence. In decisions such emotional valuing often plays an important role; in recent neurological literature this has been related to a notion of value as represented in the amygdala. In order to learn the emotional responses, (past) experiences with the environment are used. Hence, through learning processes the decision making mechanism is adapted to these experiences, so that the decision choices made are reasonable or in some way rational, given the environment reflected in these past experiences.
The computational model is based on such neurological notions as valuing in relation to feeling, body loop and as-if body loop (Ref. 11). The adaptivity in the model is based on Hebbian learning (Ref. 15).
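A minimal sketch of how such Hebbian adaptation of an emotion-related connection strength might look; the update rule form and its parameters are illustrative assumptions, not the formal model of Ref. 38:

```python
# Hebbian learning sketch for emotion-related valuing of decision options:
# the connection strength between an option and its associated feeling state
# grows with their co-activation and decays by extinction (cf. Hebb, 1949).
# The learning rate eta and extinction rate lam are hypothetical parameters.
def hebbian_update(w, option_activation, feeling, eta=0.1, lam=0.01):
    return w + eta * option_activation * feeling * (1 - w) - lam * w

w = 0.2
for _ in range(50):                 # repeated rewarding experiences
    w = hebbian_update(w, option_activation=1.0, feeling=0.8)
print(round(w, 2))                  # connection strength has increased
```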
J. Goal-oriented attention
This model construct (Ref. 7) describes how an 'ambient' agent (either human or artificial) can analyse another agent's state of attention, and act according to the outcomes of such an analysis and its own goals. This process requires some specific facilities:
A representation of a dynamical model is needed, describing the relationships between different mental states of the other agent. Such a model may be based on qualitative causal relations, but it may also concern a numerical dynamical system model that includes quantitative relationships between the other agent's mental states. In general such a model does not cover all possible mental states of the other agent, but focuses on certain aspects, for example on beliefs and desires, on emotional states, on the other agent's awareness states, or on attentional states. In this case, a model of (bottom-up) attention is used for this purpose (as described in Section 2A).
Furthermore, reasoning methods to generate beliefs on the other agent's mental state are needed to draw conclusions based on the attention model and partial information about the other agent's mental states. This may concern deductive-style reasoning methods performing forms of simulation based on known inputs to predict certain output, but also abductive-style methods reasoning from output of the model to (possible) inputs that would explain such output.
Moreover, once an estimation of the other agent's mental state has been obtained in one way or the other, it has to be assessed whether there are discrepancies between this state and the agent's own goals. Here the agent's self-interest also comes into play. It is analysed to what extent the other agent's mental state is in line with the agent's own goals, or whether a serious threat exists that the other agent will act against the agent's own goals.
Finally, a decision reasoning model is needed to decide how to act on the basis of all of this information. Two types of approaches are possible. A first approach is to take the other agent's state for granted and prepare for its consequences: to compensate for them as far as they conflict with the agent's own goals, and to exploit them as far as they can contribute to the agent's own goals (anticipation). For the case of air traffic control, an example of anticipation is when it is found that the other agent has no attention for a particular object, and it is decided that another colleague or computer system will handle it (dynamic task reallocation). A second approach is not to take the other agent's mental state for granted, but to try to adjust it by affecting the other agent, in order to obtain a mental state of the other agent that is more in line with the agent's own goals (manipulation). Both approaches can be used for training as well as for real-world support.
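The choice between the two approaches could be sketched, in strongly simplified form, as follows; the threshold, criticality flag and decision logic are hypothetical illustrations, not the actual model:

```python
# Illustrative decision step for an ambient agent: based on a belief about the
# other agent's attention for an object and on the (hypothetical) criticality
# of that object for the agent's own goals, choose to anticipate (reallocate
# the task) or to manipulate (try to adjust the other agent's attention).
def decide(attention_belief, object_critical, low=0.3):
    if attention_belief >= low:
        return 'no_action'
    # Discrepancy with own goals detected: the object needs attention.
    if object_critical:
        return 'manipulation'   # adjust the other agent's attentional state
    return 'anticipation'       # dynamic task reallocation to colleague/system

print(decide(0.1, object_critical=False))  # anticipation
print(decide(0.1, object_critical=True))   # manipulation
```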
K. Extended Mind
This model construct (Ref. 6) represents the philosophical notion of an extended mind, which can be roughly defined as an 'external state of the environment that has been created by an agent and helps this agent in its mental processing' (Ref. 10). The model can be used to explain the similarities and differences between reasoning based on internal mental states (e.g., beliefs) and reasoning based on external mental states (e.g., notes on a piece of paper, or flight progress strips in an aviation context).
To illustrate the model, think of a scenario where a person reads a user manual in order to build a closet. In the normal situation, the human would first read this manual, then generate a belief about how the closet should be built, after which (s)he would actually build the closet, resulting in the actual presence of the closet. In terms of the extended mind model, instead of an internal belief about the closet, an external mental state is used. This state could represent, for instance, a picture that the person drew on paper. Thus, instead of remembering how the closet should be built, these instructions are written down, after which they are read and executed.
This model illustrates that for a concrete task, the consequences of using an 'extended mind' state are, on the one hand, that fewer internal states (thus: fewer cognitive capabilities) have to be exploited in the reasoning process. On the other hand, a more intensive interaction with the external world (in the sense of continuously creating and observing the external mental state) is needed. In terms of situation awareness, this could be represented by making some beliefs (which in a way are replaced by the external mental states) less easily accessible.
III. Coverage of Hazards by Model Constructs
In the previous section, a total of 11 agent-based model constructs (co-)developed by VU were described. In the current section it is analysed to what extent these model constructs extend the hazard modelling capability of the existing TOPAZ agent-based model constructs assessed in Ref. 37. The assessment of whether hazards are 'covered' by a certain model construct was done in an informal way. That is, for each hazard-model construct combination, the nature of the hazard in question was studied, and by performing 'mental simulation' (i.e., imagining that the model construct in question was actually executed) the analyst assessed whether this 'simulation' could reproduce the hazard as described in Ref. 37. In some cases, full modelling of a particular hazard could be done by a combination of multiple model constructs.
Table 1. Coverage of Hazards per Cluster. Between brackets are the results obtained in Ref. 37.

| Hazard cluster | Total number of hazards | Well covered | Partly covered | Not covered |
|---|---|---|---|---|
| Aircraft systems | 14 | 13 (11), 93% (79%) | 0 (2), 0% (14%) | 1 (1), 7% (7%) |
| Navigation systems | 8 | 7 (7), 88% (88%) | 0 (0), 0% (0%) | 1 (1), 13% (13%) |
| Surveillance systems | 14 | 14 (14), 100% (100%) | 0 (0), 0% (0%) | 0 (0), 0% (0%) |
| Speech-based communication | 19 | 16 (13), 84% (68%) | 0 (2), 0% (11%) | 3 (4), 16% (21%) |
| Datalink-based communication | 10 | 10 (9), 100% (90%) | 0 (0), 0% (0%) | 0 (1), 0% (10%) |
| Pilot performance | 62 | 53 (31), 85% (50%) | 7 (13), 11% (21%) | 2 (18), 3% (29%) |
| Controller performance | 55 | 48 (23), 87% (42%) | 4 (7), 7% (13%) | 3 (25), 5% (45%) |
| ATC systems | 13 | 10 (7), 77% (54%) | 1 (2), 8% (15%) | 2 (4), 15% (31%) |
| ATC coordination | 12 | 9 (8), 75% (67%) | 0 (0), 0% (0%) | 3 (4), 25% (33%) |
| Weather | 14 | 2 (2), 14% (14%) | 4 (4), 29% (29%) | 8 (8), 57% (57%) |
| Traffic | 17 | 13 (13), 76% (76%) | 1 (0), 6% (0%) | 3 (4), 18% (24%) |
| Infrastructure & environment | 12 | 11 (11), 92% (92%) | 0 (0), 0% (0%) | 1 (1), 8% (8%) |
| Other | 16 | 6 (6), 38% (38%) | 1 (0), 6% (0%) | 9 (10), 56% (63%) |
| Total | 266 | 212 (155), 80% (58%) | 18 (30), 7% (11%) | 36 (81), 14% (30%) |
A detailed overview of the results of this analysis is presented in Ref. 9. This document shows for
each hazard if it can be (partly) modelled and by which combination of TOPAZ and VU model
constructs. Based on this analysis, Table 1 provides an overview of the numbers of hazards that are
(partly) covered by (one or more TOPAZ and/or VU) model constructs per hazard cluster (as defined
in Ref. 37). The numbers between brackets indicate the model coverage based on the TOPAZ model
constructs that were already identified in Ref. 37.
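As a sanity check, the percentages in Table 1 follow directly from the raw counts; for the totals row, with and without the VU model constructs:

```python
# Reproducing the coverage percentages of Table 1 from the raw counts
# (well / partly / not covered), rounded to whole percents.
def coverage_pct(well, partly, not_cov):
    total = well + partly + not_cov
    return tuple(round(100 * n / total) for n in (well, partly, not_cov))

# Totals over all 266 hazards, with and without the VU model constructs:
print(coverage_pct(212, 18, 36))   # (80, 7, 14)
print(coverage_pct(155, 30, 81))   # (58, 11, 30)
```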
Due to space limitations, the detailed analysis explaining which model constructs can be used to cover which hazard is not included in this paper (see Ref. 9 for this purpose). However, a sketch of such an analysis for one individual hazard is as follows. Hazard #325 is defined as "improper use of flight progress strips". For coverage of this hazard, two model constructs were considered relevant, namely the model constructs for situation awareness (Section 2F) and extended mind (Section 2K). The underlying explanation is that flight progress strips can be represented within the extended mind model construct as an 'extended mental state'. Since this mental state has to be created and observed by the controller, an additional point for potential error is introduced in the creation of this state. Next, within the situation awareness model construct, these flight progress strips play the role of the mental model upon which situation awareness is built. Hence, if this mental model is impaired, the creation of correct situation awareness will also be influenced negatively.
As can be seen in Table 1, complementing the existing TOPAZ model constructs with the library of model constructs (co-)developed by VU has led to a significantly increased percentage of agent-based model coverage of hazards. In particular, 80% of the generalised hazards are now well captured by the considered combination of model constructs (versus 58% based on the existing TOPAZ model constructs alone), 7% are partly captured (versus 11%), and 14% are not captured (versus 30%). When analysing the table in detail, it is evident that the largest gain was made in the human performance-related hazard clusters (Pilot performance, Controller performance). The table also shows that the clusters that still have significant percentages of unmodelled hazards are 'Weather' and 'Other'.
IV. Conclusion
In this paper, 11 model constructs (co-)developed by VU have been identified, and by an informal
comparison these model constructs have been found capable of modelling many of the remaining
hazards identified in Ref. 37. In particular, the percentage of hazards that has thus been found to be
well modelled has increased from 58% to 80%. The largest gain in percentage comes from human
related hazards. In particular, the percentage in modelling hazards related to pilot performance
increases from 50% to 85% and for controller performance related hazards from 42% to 87%.
The identification of these 11 complementary model constructs forms an important step in
improving the modelling of hazards in ATM. In subsequent stages of the MAREA project, additional
agent-based model constructs will be identified through exploring other literature sources, and the
integration of all model constructs will be addressed. In particular, in order to fully assess the applicability of the model constructs, they will be tailored to the ATM domain, formalised and potentially integrated with other model constructs. Since the model constructs have been set up in
a generic manner, no large changes in their structure are expected. For example, the Extended Mind
model can be applied to the ATM domain in a rather straightforward manner, by defining flight
progress strips as a particular instance of an ‘external mental state’, and defining the interaction that
an operator has with such strips in terms of the observation and action states that are part of the
model. Moreover, to fully explore the emergent behaviour of a complex system like ATM from a
Resilience Engineering perspective, the integration of multiple model constructs will be an important
next step. This integration may be achieved in various ways. For instance, the Operator Functional
State model (OFS) may be connected to the Bottom-up Attention model by establishing an explicit
relation between the experienced pressure state in the OFS model and the available attention state in
the Attention model (e.g., by stating that more experienced pressure leads to less available
attention). Other potential combinations include the integration of the Decision Making (DM) model
with the Situation Awareness (SA) model, the integration of the DM model with the Trust model, the
integration of the SA model with the OFS model, and the integration of the SA model with the Trust
model.
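For example, the suggested OFS-Attention connection could be formalised as a simple monotone coupling; the linear form and its parameters below are purely illustrative assumptions, not part of either model:

```python
# Illustrative coupling between the Operator Functional State (OFS) model and
# the Bottom-up Attention model: more experienced pressure yields less
# available attention. The linear form, capacity and sensitivity parameters
# are hypothetical.
def available_attention(experienced_pressure, capacity=1.0, sensitivity=0.5):
    # experienced_pressure is assumed to lie in [0, 1];
    # the result is clipped below at 0.
    return max(0.0, capacity - sensitivity * experienced_pressure)

print(available_attention(0.0))   # 1.0  (no pressure: full attention)
print(available_attention(1.0))   # 0.5  (high pressure: reduced attention)
```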
In follow-up research, validation of the formalised model constructs will be pursued by performing
‘proof-of-concept simulations’. These simulations will qualitatively describe ways that hazards can
evolve in ATM scenarios (similar to use case modelling within Software Engineering). The behaviour
of the models will be evaluated by comparing it against a second set of hazards (as defined in Ref.
37) and by having experts judge the plausibility of the resulting proof-of-concept simulations.
Acknowledgments
This work is part of the SESAR WP-E programme on long-term and innovative research in ATM. It
is co-financed by Eurocontrol on behalf of the SESAR Joint Undertaking.
References
1. Blom, H.A.P., Bakker, G.J., Blanker, P.J.G., Daams, J., Everdij, M.H.C., and Klompstra, M.B., "Accident risk assessment for advanced air traffic management". In: G.L. Donohue and A.G. Zellweger (eds.), Air Transport Systems Engineering, AIAA, 2001, pp. 463-480.
2. Blom, H.A.P., Stroeve, S.H., and Jong, H.H. de, "Safety risk assessment by Monte Carlo simulation of complex safety critical operations". In: F. Redmill and T. Anderson (eds.), Developments in Risk-based Approaches to Safety: Proceedings of the Fourteenth Safety-critical Systems Symposium, Bristol, U.K., Springer, 2006.
3. Bosse, T., Both, F., Lambalgen, R. van, and Treur, J., "An Agent Model for a Human's Functional State and Performance". In: Jain, L., Gini, M., Faltings, B.B., Terano, T., Zhang, C., Cercone, N., and Cao, L. (eds.), Proceedings of the Eighth IEEE/WIC/ACM International Conference on Intelligent Agent Technology, IAT'08, IEEE Computer Society Press, 2008, pp. 302-307.
4. Bosse, T., Jonker, C.M., Meij, L. van der, Sharpanskykh, A., and Treur, J., "Specification and Verification of Dynamics in Agent Models", International Journal of Cooperative Information Systems, vol. 18, 2009, pp. 167-193.
5. Bosse, T., Jonker, C.M., Meij, L. van der, and Treur, J., "A Language and Environment for Analysis of Dynamics by Simulation", International Journal of Artificial Intelligence Tools, vol. 16, 2007, pp. 435-464.
6. Bosse, T., Jonker, C.M., Schut, M.C., and Treur, J., "Simulation and Analysis of Shared Extended Mind", Simulation Journal (Transactions of the Society for Modelling and Simulation), vol. 81, 2005, pp. 719-732.
7. Bosse, T., Lambalgen, R. van, Maanen, P.P. van, and Treur, J., "A System to Support Attention Allocation: Development and Application", Web Intelligence and Agent Systems Journal, vol. 10, 2012, pp. 1-17.
8. Bosse, T., Maanen, P.-P. van, and Treur, J., "Simulation and Formal Analysis of Visual Attention", Web Intelligence and Agent Systems Journal, vol. 7, 2009, pp. 89-105.
9. Bosse, T., Sharpanskykh, A., Treur, J., Blom, H.A.P., and Stroeve, S.H., Library of existing VU model constructs, Technical report for the SESAR WP-E project MAREA, E.02.10-MAREA-D2.1, 2012.
10. Clark, A., and Chalmers, D., "The Extended Mind", Analysis, vol. 58, 1998, pp. 7-19.
11. Damasio, A., "The Somatic Marker Hypothesis and the Possible Functions of the Prefrontal Cortex", Philosophical Transactions of the Royal Society: Biological Sciences, vol. 351, 1996, pp. 1413-1420.
12. Endsley, M.R., "Toward a theory of Situation Awareness in dynamic systems", Human Factors, vol. 37(1), 1995, pp. 32-64.
13. Eurocontrol, Air navigation system safety assessment methodology, SAF.ET1.ST03.1000-MAN-01, edition 2.0.
14. Everdij, M.H.C., Blom, H.A.P., and Stroeve, S.H., "Structured assessment of bias and uncertainty in Monte Carlo simulated accident risk", Proceedings of the 8th International Conference on Probabilistic Safety Assessment and Management (PSAM8), New Orleans, USA, May 2006.
15. Hebb, D.O., The Organization of Behaviour, John Wiley & Sons, New York, 1949.
16. Hersey, P., Blanchard, K.H., and Johnson, D.E., Management of Organizational Behavior: Leading Human Resources, 2001.
17. Hesslow, G., "Conscious thought as simulation of behaviour and perception", Trends in Cognitive Sciences, vol. 6, 2002, pp. 242-247.
18. Hill, D.W., "The critical power concept", Sports Medicine, vol. 16, 1993, pp. 237-254.
19. Hockey, G.R.J., "Compensatory Control in the Regulation of Human Performance under Stress and High Workload: a Cognitive-Energetical Framework", Biological Psychology, vol. 45, 1997, pp. 73-93.
20. Hofstede, G., Cultures and Organizations, McGraw-Hill, 2005.
21. Hollnagel, E., Nemeth, C.P., and Dekker, S., Resilience Engineering Perspectives, Volume 1: Remaining Sensitive to the Possibility of Failure, Ashgate, Aldershot, England, 2008.
22. Hollnagel, E., and Woods, D.D., Joint Cognitive Systems: Foundations of Cognitive Systems Engineering, CRC Press, Boca Raton (FL), USA, 2005.
23. Hollnagel, E., Woods, D.D., and Leveson, N., Resilience Engineering: Concepts and Precepts, Ashgate, Aldershot, England, 2006.
24. Hoogendoorn, M., Jaffry, S.W., Maanen, P.P. van, and Treur, J., "Modeling and Validation of Biased Human Trust". In: Boissier, O., et al. (eds.), Proceedings of the 11th IEEE/WIC/ACM International Conference on Intelligent Agent Technology, IAT'11, IEEE Computer Society Press, 2011, pp. 256-263.
25. Hoogendoorn, M., Jaffry, S.W., and Treur, J., "An Adaptive Agent Model Estimating Human Trust in Information Sources". In: Baeza-Yates, R., Lang, J., Mitra, S., Parsons, S., and Pasi, G. (eds.), Proceedings of the 9th IEEE/WIC/ACM International Conference on Intelligent Agent Technology, IAT'09, IEEE Computer Society Press, 2009, pp. 458-465.
26. Hoogendoorn, M., Lambalgen, R. van, and Treur, J., "Modeling Situation Awareness in Human-Like Agents using Mental Models". In: Walsh, T. (ed.), Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, IJCAI'11, AAAI Press, 2011, pp. 1697-1704.
27. Jaffry, S.W., and Treur, J., "Modelling Trust for Communicating Agents: Agent-Based and Population-Based Perspectives". In: Jedrzejowicz, P., Nguyen, N.T., and Hoang, K. (eds.), Proceedings of the Third International Conference on Computational Collective Intelligence, ICCCI'11, Part I, Lecture Notes in Artificial Intelligence, vol. 6922, Springer Verlag, 2011, pp. 366-377.
28. Jonker, C.M., and Treur, J., "Formal Analysis of Models for the Dynamics of Trust based on Experiences". In: Garijo, F.J., and Boman, M. (eds.), Multi-Agent System Engineering, Proceedings of the 9th European Workshop on Modelling Autonomous Agents in a Multi-Agent World, MAAMAW'99, Lecture Notes in AI, vol. 1647, Springer Verlag, Berlin, 1999, pp. 221-232.
29. Macdonald, W., "The impact of Job Demands and Workload on Stress and Fatigue", Australian Psychologist, vol. 38(2), 2003, pp. 102-117.
30. Mee, A. van der, Mogles, N.M., and Treur, J., "An Integrative Agent Model for Adaptive Human-Aware Presentation of Information During Demanding Tasks". In: Liu, J., et al. (eds.), Proceedings of the International Conference on Active Media Technology, AMT'09, Lecture Notes in Computer Science, vol. 5820, Springer Verlag, 2009, pp. 54-68.
31. Pinder, C.C., Work Motivation in Organizational Behavior, Prentice-Hall, NJ, 1998.
32. Sharpanskykh, A., "Agent-based Modeling and Analysis of Socio-Technical Systems", Cybernetics and Systems, Special Issue on Agent and Multi-Agent Systems, Jedrzejowicz, P., Nguyen, N.T., and Szczerbicki, E. (eds.), vol. 42(5), 2011, pp. 308-323.
33. Sharpanskykh, A., On Computer-Aided Methods for Modeling and Analysis of Organizations, PhD thesis, VU University Amsterdam, 2008.
34. Sharpanskykh, A., and Stroeve, S.H., "An Agent-based Approach to Modeling and Analysis of Safety Culture in Air Traffic". In: Proceedings of the SocialCom-09/SIN-09 Conference, IEEE CS Press, 2009, pp. 229-236.
35. Sharpanskykh, A., and Stroeve, S., "An Agent-based Approach for Structured Modeling, Analysis and Improvement of Safety Culture", Computational and Mathematical Organization Theory, vol. 17, 2011, pp. 77-117.
36. Sharpanskykh, A., and Treur, J., "Adaptive Modelling of Social Decision Making by Affective Agents Integrating Simulated Behaviour and Perception Chains". In: Pan, J.-S., Chen, S.-M., and Kowalczyk, R. (eds.), Proceedings of the Second International Conference on Computational Collective Intelligence, ICCCI'10, Lecture Notes in Artificial Intelligence, Springer Verlag, 2010, pp. 284-296.
37. Stroeve, S., Everdij, M., and Blom, H., Hazards in ATM: model constructs, coverage and human responses, Technical report for the SESAR WP-E project MAREA, E.02.10-MAREA-D1.2, 2011.
38. Treur, J., and Umair, M., "On Rationality of Decision Models Incorporating Emotion-Related Valuing and Hebbian Learning". In: Lu, B.-L., Zhang, L., and Kwok, J. (eds.), Proceedings of the 18th International Conference on Neural Information Processing, ICONIP'11, Part III, Lecture Notes in Artificial Intelligence, vol. 7064, Springer Verlag, 2011, pp. 217-229.
... In spite of NLR's TOPAZ experience in modelling hazards in agent-based safety risk analysis of ATM operations, there still are many hazards in the database for which it is not yet clear how they can be modelled in an agent-based framework. In order to improve this situation, a series of studies [24,9,8] have been conducted to identify a library of model constructs that allows to capture most remaining hazards in an agent-based model of an ATM operation. The objective of the current paper is to integrate these agent-based model constructs into a single multi-agent model. ...
... This article is structured as follows. First, in Section II, an overview is given of the library of model constructs identified in the earlier studies [24,9,8]. Section III presents the integration of these model constructs in an agent-based model on a conceptual level. ...
... During the second phase the same is done for existing agent-based model constructs previously developed by VU Amsterdam. This has been reported in [9]. During the third phase, novel model constructs are being developed specifically to model hazards for which the model constructs from the first two phases fall short. ...
Full-text available
Conference Paper
Air Traffic Management (ATM) forms a large and complex socio-technical system which includes a variety of interacting human and technical agents. These interactions may emerge into various types of nominal and off-nominal behaviours. Agent-based modelling and simulation can provide a systematic analysis of such emergent behaviours in ATM. In order to improve the agent-based modelling, in earlier research a library of agent-based model constructs for hazards in ATM has been established. The objective of the current paper is to integrate these agent-based model constructs into a large multi-agent model. To illustrate the integration approach, a formal description of a selected combination of model constructs is presented and the results are discussed.
... In analysing hazards and incident scenarios in Air Traffic Management (ATM), agent-based modelling has proved to be a fruitful approach (Blom et al., 2001;Bosse et al., 2012a). However, when studying realistic scenarios, it has been found that many of them show a complex interaction of a number of aspects. ...
... Bosse et al., 2008), the Situation Awareness model (SA; cf. Hoogendoorn et al., 2011), and a decision model (DM; inspired by Bosse et al., 2012a)). These are the three models on which this section focuses. ...
... The next step is to integrate the OFS and SA models addressed above with the model DM for decision making, taken from (Bosse et al., 2012a). An overview of the different connections for this integration is shown in Fig. 2. The obtained patterns are as follows OFS model  experienced pressure  SA model  adequacy of beliefs adequacy of beliefs  DM model  adequacy of initiated actions So, the OFS model affects via experienced pressure the adequacy of beliefs generated by the SA model, and the adequacy of beliefs resulting from the SA model is a basis for adequacy of initiated actions. ...
Full-text available
Conference Paper
Aviation incidents often have a complex character in the sense that a number of different aspects of human and technical functioning come together in creating the incident. Usually only model constructs or computational agent models are available for each of these aspects separately. To obtain an overall model, these agent models have to be integrated. In this paper, existing agent models are used to develop a formal, executable agent model of a real-world scenario concerning an aircraft that descends below the minimal descent altitude because of impaired conditions of the flight crew members. Based on the model, a few proof-of-concept simulations are generated that describe how such hazardous scenarios can evolve.
... During the second phase the same is done for existing agent-based model constructs developed by VU Amsterdam. This has been reported in [5,6]. During the third phase, novel model constructs are being developed specifically to model hazards for which the model constructs from the first two phases fall short. ...
... This paper is structured as follows. Section II provides a brief summary of all of the agent-based model constructs that have been analysed during the first two phases [6,41]. It also explains how well the various hazards in the database are modelled. ...
... In the second phase of the hazard modelling study [6], 11 agent-based model constructs have been identified that follow the modelling and analysis approach used by the Agent Systems research group at VU University Amsterdam. These 11 agent-based model constructs have an emphasis on human factors, and are briefly summarised in Table II (see [6] for details). ...
Full-text available
Conference Paper
This paper studies agent-based modelling of hazards in Air Traffic Management (ATM). The study adopts a previously established large database of hazards in current and future ATM as point of departure, and explores to what extent agent-based model constructs are able to model these hazards. The agent based modelling study is organized in three phases. During the first phase existing agent-based model constructs of the TOPAZ safety risk assessment methodology are compared against the hazards in the database. During the second phase the same is done for existing agent-based model constructs developed by VU Amsterdam. During the third phase, novel model constructs are being developed specifically to model hazards that are not modelled well by the model constructs from the first two phases. The focus of this paper is on describing the model constructs of the third phase. All model constructs from the three phases together are also analysed with respect to the extent to which they model all hazards in the database. The results indicate that the total set of model constructs is capable to model 92% of the hazards well, 6% of the hazards partly and only 2% of the hazards not.
... 'Mental simulation' with these 13 model constructs showed that 58% of the hazards could be modelled well, 11% could be partially modelled and 30% could not be modelled by these 13 model constructs. During the second phase, 11 complementary model constructs have been identified through searching human performance sub-models that have been applied by the Agent Systems research group at VU University Amsterdam [30]; these are shown inTable III. After this extension, 80% of the hazards could be well modelled, 7% could be partially modelled, and 14% could not be modelled. ...
... Through a series of studies293031, 38 specific model constructs have been identified that allow to model (partially) well over 97% of the large set of hazards considered. Of these 38 model constructs, 13 are commonly in use within the TOPAZ safety risk assessment methodology, and 25 are novel. ...
Conference Paper
One of the key steps in safety risk assessment of an Air Traffic Management (ATM) operation is to identify as many potential hazards as possible. All these potential hazards have to be analysed for their possible contribution to the safety risk of the operation considered. In an agent-based safety risk assessment of ATM operations there are two approaches towards assessing the safety risk impacts of hazards. The direct way is to incorporate the hazard in the agent-based model, and to assess the safety risk of this model by conducting Monte Carlo simulations. The alternative is to avoid modelling a potential hazard in the agent-based model, and instead to assess the impact of the hazard on safety risk through sensitivity analysis and bias and uncertainty assessment. Because agent-based modelling and simulation of hazards might reveal emergent behaviour that remains invisible through sensitivity analysis, there is a need to understand how to model various hazard types in an agent-based model. To comply with this need, this paper identifies 38 model constructs that are able to capture more than 97% of the potential ATM-related hazards in an agent-based model. The paper also shows that four of the five main model constructs are related to four widely used modelling domains in aviation, i.e. system reliability, human performance simulation, human reliability analysis, and aircraft trajectory simulation. However, the model construct that captures the highest percentage of hazards (41%) is related to the more recent domain of multi-agent systems modelling.
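The "direct way" described above, estimating safety risk by repeated simulation of an agent-based model, can be sketched in its simplest form. This is a minimal, generic Monte Carlo estimator, not the TOPAZ methodology itself; the toy simulation and its hazard probabilities are invented for illustration.

```python
import random

def mc_risk_estimate(simulate_once, n_runs=100_000, seed=0):
    """Minimal Monte Carlo risk estimator: run a (possibly agent-based)
    simulation n_runs times and return the fraction of runs that end
    in an incident."""
    rng = random.Random(seed)
    incidents = sum(simulate_once(rng) for _ in range(n_runs))
    return incidents / n_runs

# Toy stand-in for an agent-based ATM simulation: an incident occurs
# only when two independent rare hazards coincide in the same run.
def toy_simulation(rng, p_hazard_a=0.01, p_hazard_b=0.02):
    return rng.random() < p_hazard_a and rng.random() < p_hazard_b

risk = mc_risk_estimate(toy_simulation)
# The estimate fluctuates around the analytic value p_a * p_b = 2e-4.
```

In practice the rare-event character of accident risk calls for far more runs or variance-reduction techniques; a plain estimator like this only illustrates the principle.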
... This paper provides an overview of the main results of the research towards this end. Details on the research steps have been published in earlier conference papers [15][16][17][18][19] and in MAREA reports. This paper is organized as follows. ...
Conference Paper
The ability of the sociotechnical ATM system to adjust its functioning to changes and disturbances, and thereby sustain required operations is a key asset, in which human operators play crucial roles. Previously, we have shown that agent-based modelling can effectively support analysis of the safety implications of the behaviour of interacting human operators and technical systems in their effort to deal with disturbances in ATM. In this paper we provide an overview of a library of model constructs for agent-based modelling in ATM and we show the integration of these model constructs. We show that the library of model constructs can effectively model a large set of hazards in ATM and we discuss ways towards effective use of these models for the analysis of safety-focused resilience.
Book
Nothing has been more prolific over the past century than human/machine interaction. Automobiles, telephones, computers, manufacturing machines, robots, office equipment, machines large and small; all affect the very essence of our daily lives. However, this interaction has not always been efficient or easy and has at times turned fairly hazardous. Cognitive Systems Engineering (CSE) seeks to improve this situation by the careful study of human/machine interaction as the meaningful behavior of a unified system. Written by pioneers in the development of CSE, Joint Cognitive Systems: Foundations of Cognitive Systems Engineering offers a principled approach to studying human work with complex technology. The authors use a top-down, functional approach and emphasize a proactive (coping) perspective on work that overcomes the limitations of the structural human information processing view. They describe a conceptual framework for analysis with concrete theories and methods for joint system modeling that can be applied across the spectrum of single human/machine systems, social/technical systems, and whole organizations. The book explores both current and potential applications of CSE illustrated by examples. Understanding the complexities and functions of the human/machine interaction is critical to designing safe, highly functional, and efficient technological systems. This is a critical reference for students, designers, and engineers in a wide variety of disciplines.
Conference Paper
Monte Carlo simulation of an accident risk model of a complex safety-critical operation provides valuable feedback to the decision makers who are responsible for the safety of such an operation. By definition, such a Monte Carlo simulation model differs from reality at various points and levels. Hence, the feedback to the decision makers should include an assessment of the combined effect of these differences in terms of bias and uncertainty at the simulated risk level. In the literature, the assessment of risk bias and uncertainty due to differences in parameter values has received most attention, e.g. Morgan and Henrion (1990) [1], Kumamoto and Henley (1996) [2]. Obviously, there are many more differences between model and reality than parameter value differences alone. The paper presents a structured approach for the assessment of bias and uncertainty in Monte Carlo simulation of accident risk due to differences in parameter values as well as differences that fall beyond the parameter level. For the assessment of differences in parameter values we follow the first-order differential analysis of bias and uncertainty in the accident risk under log-normal assumptions, e.g. [1], and combine bias and uncertainty estimates of parameter values with log-normal risk sensitivities for these parameter variations. Because the number of parameter values may be large, this assessment is performed in two phases. In the first phase an initial bias and uncertainty assessment of parameter values is performed, largely using expert knowledge. The second phase focuses on the parameter values that have the largest effect on the risk level; for these, statistical data is collected and sensitivity analysis is performed by running dedicated Monte Carlo simulations. For the assessment of bias due to differences other than parameter value differences, the paper combines the two structured approaches by Zio and Apostolakis (1996) [3].
One of their approaches assumes alternate hypotheses for the risk case considered, develops an alternate model for each alternate hypothesis, assesses the risk level for each alternate model, and elicits experts on the probability that each alternate model is correct. Their second approach uses an adjustment factor to compensate for differences between model and reality, and elicits experts for the estimation of this adjustment factor. The novelty in this paper is to combine, per non-parameter difference, one alternate hypothesis with one adjustment factor, and to evaluate the bias through the following two estimates for each non-parameter difference: 1. the probability that there is a difference, i.e. the alternate hypothesis is correct; and 2. the conditional risk bias given that the alternate hypothesis is correct, i.e. the conditional adjustment factor. These estimates per non-parameter difference are evaluated by teams of safety experts and operational experts, and then combined into an overall bias estimate for all non-parameter differences. The estimation of these two factors by experts appears to work quite naturally, especially since the estimation of the conditional risk bias is supported by the risk sensitivity knowledge for each of the model parameters stemming from assessment of the parameter value differences. The novel structured bias and uncertainty assessment approach is illustrated for a Monte Carlo simulation based accident risk assessment for an air traffic operation example.
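The two per-difference estimates described above (the probability that the alternate hypothesis is correct, and the conditional adjustment factor given that it is) combine naturally into an expected multiplicative bias. The sketch below assumes the per-difference factors are independent and multiplicative; that combination rule is an illustrative assumption, not a quote of the paper's exact aggregation scheme, and the elicited numbers are hypothetical.

```python
def expected_bias_factor(p_diff, cond_adjustment):
    """Expected multiplicative risk bias for one non-parameter difference:
    with probability p_diff the alternate hypothesis holds and the risk is
    multiplied by cond_adjustment; otherwise the factor is 1."""
    return (1.0 - p_diff) * 1.0 + p_diff * cond_adjustment

def overall_bias(differences):
    """Combine per-difference factors, assuming independence (illustrative)."""
    total = 1.0
    for p, adj in differences:
        total *= expected_bias_factor(p, adj)
    return total

# Two hypothetical non-parameter differences elicited from expert teams,
# each given as (probability hypothesis is correct, conditional adjustment):
diffs = [(0.2, 3.0), (0.05, 10.0)]
print(round(overall_bias(diffs), 2))  # → 2.03
```

A factor above 1 indicates that, on balance, the unmodelled differences are expected to make the true risk higher than the simulated risk.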
Article
Where are the borders of the mind, and where does the rest of the world begin? Two standard answers are possible. Some philosophers argue that these borders are defined by our skull and skin: everything outside the body is also outside the mind. Others argue that the meanings of our words "simply are not in our heads" and insist that this meaning externalism applies also to the mind. The authors suggest a third position, i.e. quite another form of externalism. Their so-called active externalism implies an active involvement of the background in controlling cognitive processes.
Chapter
In this article I discuss a hypothesis, known as the somatic marker hypothesis, which I believe is relevant to the understanding of processes of human reasoning and decision making. The ventromedial sector of the prefrontal cortices is critical to the operations postulated here, but the hypothesis does not necessarily apply to prefrontal cortex as a whole and should not be seen as an attempt to unify frontal lobe functions under a single mechanism. The key idea in the hypothesis is that 'marker' signals influence the processes of response to stimuli, at multiple levels of operation, some of which occur overtly (consciously, 'in mind') and some of which occur covertly (non-consciously, in a non-minded manner). The marker signals arise in bioregulatory processes, including those which express themselves in emotions and feelings, but are not necessarily confined to those alone. This is the reason why the markers are termed somatic: they relate to body-state structure and regulation even when they do not arise in the body proper but rather in the brain's representation of the body. Examples of the covert action of 'marker' signals are the undeliberated inhibition of a response learned previously; the introduction of a bias in the selection of an aversive or appetitive mode of behaviour, or in the otherwise deliberate evaluation of varied option-outcome scenarios. Examples of overt action include the conscious 'qualifying' of certain option-outcome scenarios as dangerous or advantageous. The hypothesis rejects attempts to limit human reasoning and decision making to mechanisms relying, in an exclusive and unrelated manner, on either conditioning alone or cognition alone.
Article
The basis of the critical power concept is that there is a hyperbolic relationship between power output and the time that the power output can be sustained. The relationship can be described based on the results of a series of 3 to 7 or more timed all-out predicting trials. Theoretically, the power asymptote of the relationship, CP (critical power), can be sustained without fatigue; in fact, exhaustion occurs after about 30 to 60 minutes of exercise at CP. Nevertheless, CP is related to the fatigue threshold, the ventilatory and lactate thresholds, and maximum oxygen uptake (V̇O2max), and it provides a measure of aerobic fitness. The second parameter of the relationship, AWC (anaerobic work capacity), is related to work performed in a 30-second Wingate test, work in intermittent high-intensity exercise, and oxygen deficit, and it provides a measure of anaerobic capacity. The accuracy of the parameter estimates may be enhanced by careful selection of the power outputs for the predicting trials and by performing a greater number of trials. These parameters provide fitness measures which are mode-specific, combine energy production and mechanical efficiency in 1 variable, and do not require the use of expensive equipment or invasive procedures. However, the attractiveness of the critical power concept diminishes if too many predicting trials are required for generation of parameter estimates with a reasonable degree of accuracy.
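The hyperbolic power-time relationship described above is equivalent to a linear work-time model, W = AWC + CP · t, so CP and AWC can be estimated by ordinary least squares over the timed all-out trials. The sketch below uses synthetic, noise-free trial data generated from assumed values CP = 250 W and AWC = 20 kJ; real trials would of course be noisy.

```python
def estimate_cp_awc(trials):
    """Estimate critical power (CP, slope) and anaerobic work capacity
    (AWC, intercept) from all-out trials via the linear model
    W = AWC + CP * t, where W = P * t is total work.
    trials: list of (power_watts, time_to_exhaustion_s) pairs."""
    ts = [t for _, t in trials]
    ws = [p * t for p, t in trials]  # total work per trial
    n = len(trials)
    mean_t = sum(ts) / n
    mean_w = sum(ws) / n
    # Ordinary least squares: slope = cov(t, W) / var(t)
    cov = sum((t - mean_t) * (w - mean_w) for t, w in zip(ts, ws))
    var = sum((t - mean_t) ** 2 for t in ts)
    cp = cov / var
    awc = mean_w - cp * mean_t
    return cp, awc

# Synthetic trials from CP = 250 W, AWC = 20 kJ, using t = AWC / (P - CP):
trials = [(400, 20000 / 150), (350, 20000 / 100), (300, 20000 / 50)]
cp, awc = estimate_cp_awc(trials)
# Noise-free data lies exactly on the line, so the fit recovers
# CP ≈ 250 W and AWC ≈ 20 000 J.
```

This mirrors the abstract's point that parameter accuracy depends on the choice of trial power outputs: trials clustered near CP give long, similar times and a poorly conditioned fit.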