The Operator 4.0: Human Cyber-Physical Systems
& Adaptive Automation towards
Human-Automation Symbiosis Work Systems
David Romero1-2, Peter Bernus2, Ovidiu Noran2, Johan Stahre3, Åsa Fast-Berglund3
1 Tecnológico de Monterrey, Mexico
david.romero.diaz@gmail.com
2 Griffith University, Australia
O.Noran@griffith.edu.au, P.Bernus@griffith.edu.au
3 Chalmers University of Technology, Sweden
johan.stahre@chalmers.se, asa.fasth@chalmers.se
Abstract. A vision for the Operator 4.0 is presented in this paper in the context
of human cyber-physical systems and adaptive automation towards human-
automation symbiosis work systems for a socially sustainable manufacturing
workforce. Discussions include base concepts and enabling technologies for
the development of human-automation symbiosis work systems in Industry 4.0.
Keywords: Operator 4.0, Human Cyber-Physical Systems, Adaptive Automation,
Human-Automation Symbiosis, Socially Sustainable Manufacturing.
1 Introduction
Industry 4.0 enables new types of interactions between operators and machines [1],
interactions that will transform the industrial workforce and will have significant
implications for the nature of work, in order to accommodate the ever-increasing
variability of production. An important part of this transformation is the emphasis on
human-centricity of the Factories of the Future [2], allowing for a paradigm shift from
independent automated and human activities towards a human-automation symbiosis
(or ‘human cyber-physical systems’) characterised by the cooperation of machines with
humans in work systems and designed not to replace the skills and abilities of humans,
but rather to co-exist with and assist humans in being more efficient and effective [3].
In this sense, the history of the interaction of operators with various industrial
and digital production technologies can be summarised as a generational evolution.
Thus, the Operator 1.0 generation is defined as humans conducting ‘manual and dextrous work’ with some support from mechanical tools and manually operated machine tools. The Operator 2.0 generation represents a human entity who performs ‘assisted work’ with
the support of computer tools, ranging from CAx tools to NC operating systems (e.g.
CNC machine tools), as well as enterprise information systems. The Operator 3.0
generation embodies a human entity involved in ‘cooperative work’ with robots and
other machines and computer tools, also known as ‘human-robot collaboration’. The
Operator 4.0 generation represents the ‘operator of the future’, a smart and skilled
operator who performs ‘work aided’ by machines if and as needed. It represents a new
design and engineering philosophy for adaptive production systems where the focus is
on treating automation as a further enhancement of the human’s physical, sensorial and
cognitive capabilities by means of human cyber-physical system integration (see Fig. 1).
Fig. 1. Operator Generations (R)Evolution
This paper explores a vision for the Operator 4.0 in the context of human cyber-
physical systems and adaptive automation towards human-automation symbiosis work
systems for a socially sustainable manufacturing workforce. The discussions within the following sections include base concepts and enabling technologies for the development of the proposed human-automation symbiosis work systems in Industry 4.0.
2 Base Concepts
The concept of Balanced Automation Systems (BAS) [4] was introduced in the early 1990s as an attempt to achieve the right combination of automation and manual operations (cf. Operator 2.0 & 3.0) in production systems, taking into account economic and socio-organisational aspects for the (re-)engineering of competitive and socially sustainable production systems. BAS implementations have mainly been based on the principles of ‘anthropocentric production systems’ [5] and the advantages offered by flexible automation as an extension of programmable automation in manufacturing systems.
In [6], a Next Generation BAS concept has previously been defined with the aim of stepping beyond the ‘right balance’ between automated and manual tasks in production systems, so as to achieve ‘human-automation symbiosis’ for enhancing workforce capabilities (cf. Operator 4.0) and increasing manufacturing flexibility (cf. Factory 4.0)
of production systems. The vision of Next Generation BASs is that while they will
still rely on the guidelines of ‘anthropocentric production systems’ [5], they will
moreover feature ‘adaptive automation’ [7-9] for the dynamic allocation of control
over manufacturing and assembly tasks to a human operator and/or a machine for
the purpose of optimising overall production system performance. This will be
done considering [10-11]: (a) sustainable technical and economic benefits for the manufacturing enterprise (e.g. improved quality, increased responsiveness, shorter throughput times, easier planning and control of production processes, increased capacity for innovation and continual improvement) and (b) social-human benefits for
the workforce (e.g. increasing quality of working life, higher job satisfaction through
meaningful tasks, greater personal flexibility and adaptation, improved ability and skills
of shop-floor personnel).
Based on the previous context, we define Human Cyber-Physical Systems (H-CPS)
as systems engineered to: (a) improve human abilities to dynamically interact with
machines in the cyber- and physical- worlds by means of ‘intelligent’ human-machine
interfaces, using human-computer interaction techniques designed to fit the operators’
cognitive and physical needs, and (b) improve human physical-, sensing- and cognitive-
capabilities, by means of various enriched and enhanced technologies (e.g. using
wearable devices). Both H-CPS aims are to be achieved through computational and
communication techniques, akin to adaptive control systems with the human-in-the-loop.
The Adaptive Automation (AA) movement [7-8] [12-13] aims at optimising human-
machine cooperation to efficiently allocate labour (cognitive & physical) and distribute
tasks between the automated part and the humans in the workstations of an adaptive
production system [13]. AA allows the human and/or the machine to modify the level
of automation by shifting the control of specific functions whenever predefined
conditions (e.g. critical-event, measure-based and/or modelling-based) are met [14].
The ultimate AA goal is the achievement of human-automation symbiosis by means of
adaptation of automation & control across all workstations of a human-centred and
adaptive production system in order to allow a dynamic and seamless transition of
functions (tasks) allocation between humans and machines that optimally leverages
human skills to provide inclusiveness and job satisfaction while also achieving
production objectives.
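As a minimal illustration of such condition-triggered shifts, the following Python sketch adjusts the level of automation whenever a critical event occurs or a workload measure crosses a predefined limit; the enumeration, thresholds and workload signal are illustrative assumptions rather than part of the cited AA literature.

from enum import IntEnum

class LevelOfAutomation(IntEnum):
    MANUAL = 1       # the operator controls the function
    SHARED = 2       # operator and machine share control
    SUPERVISED = 3   # the machine controls, the operator supervises

def adapt_level(current: LevelOfAutomation,
                critical_event: bool,
                workload: float,              # assumed normalised 0..1 workload measure
                workload_limit: float = 0.8) -> LevelOfAutomation:
    # shift control when a predefined condition is met (critical-event or measure-based)
    if critical_event or workload > workload_limit:
        return LevelOfAutomation(min(current + 1, LevelOfAutomation.SUPERVISED))  # relieve the operator
    if workload < 0.3 and current > LevelOfAutomation.MANUAL:
        return LevelOfAutomation(current - 1)  # hand control back to keep the operator engaged
    return current

# example: a workload spike shifts control from shared to supervised
print(adapt_level(LevelOfAutomation.SHARED, critical_event=False, workload=0.9))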
Human-in-the-loop (HITL) feedback control systems are defined as systems that
require human interaction [15]. HITL control models offer interesting opportunities to
a broad range of H-CPS applications, such as the ‘Operator 4.0’. HITL control models
can help to supervise an operator’s performance in a human-machine interaction,
and (a) let the operator directly control the operation under supervisory control,
(b) let automation monitor the operator and take appropriate actions, or (c) a hybrid of
‘a’ and ‘b’, where automation monitors the operator, takes human inputs for the control,
and takes appropriate actions [15]. HITL control models, although challenging due to the complex physiological, sensorial and cognitive nature of human beings, are an important enabler for achieving ‘human-automation symbiosis’.
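The three HITL control modes described above can be summarised in a small Python sketch; the mode names, the operator-state flag and the blending rule are hypothetical placeholders, not an established HITL API.

def blend(human_cmd: float, machine_cmd: float, weight: float = 0.7) -> float:
    # weighted mix of human and machine commands (assumed numeric set-points)
    return weight * human_cmd + (1 - weight) * machine_cmd

def hitl_step(mode: str, operator_input: float, operator_state_ok: bool, auto_action: float) -> float:
    if mode == "human_direct":         # (a) the operator controls under supervisory control
        return operator_input
    if mode == "automation_monitors":  # (b) automation monitors the operator and intervenes
        return operator_input if operator_state_ok else auto_action
    if mode == "hybrid":               # (c) automation blends human input with its own action
        return blend(operator_input, auto_action) if operator_state_ok else auto_action
    raise ValueError(f"unknown mode: {mode}")

# example: in hybrid mode a healthy operator keeps most of the control authority
print(hitl_step("hybrid", operator_input=1.0, operator_state_ok=True, auto_action=0.5))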
3 Human-Automation Symbiosis: Intelligent Hybrid Agents
In this section, the strategy to attain human-automation symbiosis in manufacturing
work systems is explored through a discourse of ‘adaptive automation’ and ‘intelligent
multi-agent systems’ as the bases for a sharing and trading of control strategy [14].
An intelligent agent is an entity (human, artificial or hybrid) with the following
characteristics [16]: (a) purposeful - displays goal-seeking behaviour, (b) perceptive
- can observe information about the surrounding world and filter it according to
relevance for orientation, (c) aware - can develop situational awareness that is relevant
for the agent’s purpose, (d) autonomous - can decide a course of action (plan) to achieve
the goal, (e) able to act - can mobilise its resources to act on its plan; these resources
may include parts of the self or tools at the autonomous disposal of the agent, and
resources for physical action or information gathering/processing, (f) reflective - can
represent and reason about the abilities and goals of self and those of other agents,
(g) adaptable and learning - can recognise inadequacy of its plan and modify it, or
change its goal, and (h) conversational and cooperative - can negotiate with other
agents to enhance perception, develop common orientation, decide on joint goals,
plans, and action; essentially participate in maintaining the ‘emergent agent’ created
through joint actions of agents. Note that this classification of agent functions may
be interpreted as the ability to perform the Observe, Orient, Decide and Act (OODA)
Loop of Boyd [17] [18], developed as a theory to explain the conditions and functions
of successful operation, and therefore this classification may be used to direct the
engineering and development of intelligent agents [16], which, as we shall see below,
are expected to be ‘adaptive’ and ‘hybrid’ in nature.
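To make the mapping of the agent functions (a)-(h) onto the OODA loop concrete, the following Python skeleton sketches one possible structure of an intelligent agent; the class and method names are assumptions introduced here purely for illustration.

from abc import ABC, abstractmethod

class IntelligentAgent(ABC):
    # human, artificial or hybrid agents would provide the concrete behaviour

    @abstractmethod
    def observe(self, environment) -> dict:
        """Perceptive: collect and filter information relevant to the goal."""

    @abstractmethod
    def orient(self, observations: dict) -> dict:
        """Aware / reflective: build situational awareness from the observations."""

    @abstractmethod
    def decide(self, situation: dict) -> list:
        """Autonomous / adaptable: choose or revise a plan towards the goal."""

    @abstractmethod
    def act(self, plan: list) -> None:
        """Able to act: mobilise own resources or tools to execute the plan."""

    def ooda_cycle(self, environment) -> None:
        # one Observe-Orient-Decide-Act iteration
        self.act(self.decide(self.orient(self.observe(environment))))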
Human agents, under certain circumstances, and in defined domains of activity,
are able to act as intelligent agents (e.g. able to perform complex assembly sequences
and operations in a flexible production line). However, once the assumptions are
no longer true (e.g. due to a heavy physical, sensorial and/or cognitive workload),
the quality of agenthood deteriorates; thus the human no longer has the ability to
perform one or several functions that are normally attributed to an intelligent agent.
Consequently, the question is: how to restore human agenthood by extending human
capabilities (physical, sensorial and/or cognitive) through automation-aided means?
Similarly, artificial (machine) agents, under certain circumstances and in defined
domains of activity can act as intelligent agents (e.g. they are able to perform repetitive
and routine tasks in a high volume production line, make decisions based on learnt
patterns, etc.). Nevertheless, once the assumptions are no longer true (e.g. the need
(ability) to improvise and use flexible processes to reduce production downtime due
to an error), the quality of agenthood deteriorates; thus the machine no longer has the ability to perform one or several functions that are normally attributed
to an intelligent agent. Therefore, the question is: how to restore machine agenthood
by extending the machine’s capabilities through human-aided means?
Hybrid agents are intelligent agents established as a symbiotic relationship (human-
automation symbiosis) between the human and the machine, so that in situations where
neither would display agenthood in isolation, the symbiotic hybrid agent does. In this
research, the vision is that whenever a human (the ‘Operator 4.0’) lacks some of these agenthood abilities, for example due to heavy physical, sensorial and/or cognitive workload,
automation will extend the human’s abilities as much as necessary to help the operator
to perform the tasks at hand, according to the expected quality of performance criteria.
Thus, it is proposed to implement hybrid agents, as a form of ‘adaptive automation’,
in order to sustain agenthood by determining whenever and wherever the operator
requires augmentation (e.g. using advanced trained classifiers to recognise this need
[19]), and prompting the appropriate type and level of automation to facilitate optimal
operator performance. An important objective is that the level of this extension need
not be a ‘design time’ decision, but should be able to be dynamically configured
as needed. Furthermore, the ‘hybrid agents’ view of the Operator 4.0 is a component
of the solution to preserve the operator’s situation awareness [20], as the status,
experience and information processing capability of the operator can cause loss of
agenthood and consequent decision-making errors, thus the need for ‘symbiotic
technical support’. Work on affective computing [21] showed that the task allocation
and adaptation between humans and machines/computers supporting them is not a
trivial task and should involve sensory assessments of humans’ physical and cognitive
states in order to be efficient.
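A minimal sketch of this run-time augmentation decision is given below in Python; the capability labels, thresholds and aid catalogue are hypothetical, and the simple threshold rule merely stands in for the advanced trained classifiers discussed in [19] [26].

def augmentation_needed(measures: dict, thresholds: dict) -> list:
    # stand-in for a trained classifier: flag the capabilities that currently need aiding
    return [capability for capability, value in measures.items()
            if value > thresholds.get(capability, 1.0)]

def hybrid_agent_step(measures: dict, thresholds: dict) -> dict:
    # decide at run time (not at design time) which aids to switch on
    aids = {"physical": "exoskeleton_assist",       # illustrative aid catalogue
            "sensorial": "augmented_display",
            "cognitive": "decision_support"}
    return {capability: aids[capability]
            for capability in augmentation_needed(measures, thresholds)}

# example: a high cognitive-load reading prompts cognitive support only
print(hybrid_agent_step({"physical": 0.2, "sensorial": 0.4, "cognitive": 0.9},
                        {"physical": 0.7, "sensorial": 0.7, "cognitive": 0.7}))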
For the purpose of comparison, in the case of an Operator 3.0 (cf. human-robot
collaboration), the design time decision would be determined by the required capability
of the manufacturing or assembly operation (e.g. speed, accuracy, capacity, reliability,
etc.), which then would decide (based on technical, economic, social and human
benefits) the level of automation of the process, as well as the accompanying skills and
abilities required by the human role. In contrast, in the case of an Operator 4.0,
the automation level would be determined in less detail at design time, allowing an initial
detailed procedure and much automated support (e.g. in case of a novice or new-to-
the-task operator), while providing ‘on the fly’ solutions that develop together with
the individual operator’s skills. Apart from achieving job satisfaction and a variety
of desired process ‘ilities’ [22], such dynamic allocation of different levels/extent
of automation fosters the use of human skills and abilities. This includes the creation
of favourable conditions for workforce development and learning, the improvement of
human-robot collaboration and tacit knowledge development, as it is well known that
in many (although not all) tasks, acting based on tacit knowledge is much more efficient and effective than following predetermined procedures.
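A possible (purely illustrative) way to configure such ‘on the fly’ support from the operator’s current state, rather than fixing it at design time, is sketched below in Python; the maturity categories follow the levels discussed in Sect. 4.1, while the concrete settings are assumptions.

def configure_support(expertise: str, fatigued: bool) -> dict:
    # choose the amount of automated guidance from the operator's current state
    profiles = {
        "novice":   {"guidance": "step_by_step", "supervision": "continuous"},
        "advanced": {"guidance": "on_request",   "supervision": "per_milestone"},
        "expert":   {"guidance": "none",         "supervision": "outcome_only"},
    }
    support = dict(profiles[expertise])
    if fatigued:                       # context changes the allocation dynamically
        support["guidance"] = "step_by_step"
    return support

# example: even an expert receives detailed guidance when fatigued
print(configure_support("expert", fatigued=True))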
Emergent agents are virtual entities, which exist as a cooperative and negotiated
arrangement between multiple agents of either kind above (sometimes on multiple
levels of static or dynamic aggregation), whereupon two human agents, or a human and
a machine/robot, or two machine/robot agents, or more than two agents of any of these
types, form a ‘joint entity agent’ that, from the external observer’s viewpoint, acts as a
single intelligent agent. It is expected that an Operator 4.0 will have the ability to be
part of an intelligent group of agents with appropriate functionality for the formation,
operation, transformation and dissolution of these groups. Note that it is not necessary
for every agent to have the same level of contribution to such self-organising ability;
agents may specialise in certain tasks and assume different roles in the lifecycle of
the emergent agent.
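As a simple illustration of emergent-agent formation, the Python sketch below checks whether a group of member agents jointly covers the OODA functions even when no member covers them alone; the capability sets and names are assumptions.

def can_form_emergent_agent(members: list) -> bool:
    # the joint entity is viable when the members together cover all OODA functions,
    # even if no member covers them alone
    required = {"observe", "orient", "decide", "act"}
    covered = set().union(*(member["capabilities"] for member in members))
    return required <= covered

operator = {"name": "operator_4_0", "capabilities": {"orient", "decide"}}
cobot = {"name": "cobot", "capabilities": {"observe", "act"}}
print(can_form_emergent_agent([operator, cobot]))   # True: the pair acts as one agent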
4 The Operator 4.0: Aiding for Enhanced Workers’ Capabilities
A capability is the “measure of the ability of an entity (e.g. department, organisation,
person, system) to achieve its objectives, especially in relation to its overall mission”
[23]. In the case of human beings, this involves having the resources and the ability
to deploy their capabilities for a purpose.
4.1 Automation Aiding for Enhanced Physical Capabilities
A physical activity is any bodily movement produced by skeletal muscles that requires
energy expenditure. We define physical capability as the operator’s capacity and ability
to undertake physical activities needed for daily work, which can be characterised by
multiple attributes, including the description of the physical function (e.g. ability to lift,
walk, manipulate and assemble) together with its non-functional properties (e.g. speed,
strength, precision and dexterity), as well as the description of the ability in terms of
maturity- and expertise-level. The agent activity supported here is that of (physically) acting, i.e. the ‘A’ in the OODA loop.
For example, the operator may be: (a) ‘procedure following - novice’ with no
autonomy over the details of the operation and under supervision along the whole
procedure, (b) ‘procedure following - advanced’ with limited operational autonomy
and less supervision across the procedure, or (c) ‘expert’ - featuring internalised tacit
knowledge (know-how) and autonomy towards improving the operation, where only
the operation’s outcome is supervised. The vision of Operator 4.0 acknowledges that
capabilities are not static but evolve over time and change depending on context (e.g. the operator may be tired or rested, new or accustomed to the task); therefore, physically aiding an Operator 4.0 assumes that one can assess the physical
capabilities in a dynamic and timely fashion, preferably in real-time. Some assessment
tools for testing an operator’s physical capabilities may include: (a) Physical Abilities
Tests (PATs) [24] [25] capable of matching the physical abilities of an operator with
the physical demands of a job (or operation) prior to its allocation (e.g. such methods
are getting increased attention in the defence community); and (b) Advanced Trained
Classifiers (ATCs) [26], based on a variety of machine learning techniques, to measure
(test) in real-time the operator’s physical performance and dynamically identify when
an assisted/enhanced operation is necessary in an unobtrusive manner, relying on
physiological measures (cf. ergonomics [27]). This is done in order to actively determine
when an operator actually requires assistance and subsequently prompt the appropriate
type and level of physical (aided) capability to facilitate optimal physical performance
by the operator. Moreover, PATs may be useful for job role allocation and/or for
determining training needs (e.g. how to handle lifting, posture correction, etc.), while
ATCs may be advantageous for reducing the chances of accidents due to tiredness or
of injuries due to repetitive strain, or to improve product quality by reducing errors and
re-work.
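For illustration only, the following Python sketch replaces an advanced trained classifier with a simple rolling-average rule over a normalised physiological measure to decide when to prompt a physical aid; the window size, threshold, readings and the suggested aid are assumptions.

from collections import deque

class PhysicalStrainMonitor:
    def __init__(self, window: int = 30, strain_limit: float = 0.8):
        self.samples = deque(maxlen=window)   # rolling window over a normalised strain measure
        self.strain_limit = strain_limit

    def update(self, measure: float) -> bool:
        # return True when the operator likely needs physical assistance
        self.samples.append(measure)
        sustained = sum(self.samples) / len(self.samples)
        return len(self.samples) == self.samples.maxlen and sustained > self.strain_limit

monitor = PhysicalStrainMonitor(window=5, strain_limit=0.75)
for reading in [0.6, 0.8, 0.85, 0.9, 0.88]:          # assumed physiological readings
    if monitor.update(reading):
        print("prompt physical aid, e.g. engage a lift-assist device")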
4.2 Automation Aiding for Enhanced Sensing Capabilities
A sensorial capability is the operator’s capacity and ability to acquire data from the
environment, as a first step towards creating information necessary for orientation and
decision-making in the operator’s daily work [28]. There are two components to sensing:
(a) the physical ability to collect data from the environment (by vision, smell, sound,
touch, vibration), and (b) the ability to selectively perceive it (as we know that a very
low percentage of the data generated by the physical senses of an operator enters the
short-term memory and is made available for processing). It is known that an operator
is selectively filtering out what he/she does not consider important: “of the entire
amount of new information generated by our environment, our senses filter out >99%
of signals before they reach our consciousness” [29]. It is also known that this filtering
is not a conscious process. Therefore, OODA is not a simple loop; there is information
that flows to make an operator perceive selectively what his/her brain considers
important (i.e. what data are useful for analysis and decision-making). This selectivity
is acquired by the operator through learning. As a consequence, there are two points
where the operator’s sensing abilities are subject to assessment and where these abilities
may need improvement, as further described.
The first potential sensory improvement is the creation of new senses or the augmentation of existing ones (e.g. by using sensor devices to collect, convert and aggregate
signals that would not be accessible for the operator, either due to physical accessibility
of the data source, general human limitations, or due to individual personal limitations).
Also, due to the different levels of sensitivity of humans across senses, transforming
one signal to another form may increase the ability of the human to identify information
within the data (e.g. transforming temperature to visible colour, vibration to audible
spectrum sound, or using data aggregation, can enable the human to make use of
otherwise inaccessible data). The second type of sensory limitation is more difficult to
overcome if it is to be done exclusively at sensor level. This is because information
feedback produced by analysis (orientation) and decision-making must be used to filter
out unwanted data (i.e. containing irrelevant information) and to sensitise selective
perception to smaller signals, which may carry relevant information.
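The first kind of sensory aiding can be illustrated with two simple (assumed) signal transformations in Python: mapping a temperature reading onto a colour scale and shifting a low-frequency vibration into the audible range; the ranges, colours and scaling factor are illustrative, not calibrated values.

def to_visible_scale(value: float, lo: float, hi: float) -> str:
    # map a signal the operator cannot sense directly onto a coarse colour scale
    ratio = max(0.0, min(1.0, (value - lo) / (hi - lo)))
    if ratio < 0.33:
        return "blue"       # low
    if ratio < 0.66:
        return "yellow"     # medium
    return "red"            # high

def to_audible_pitch(vibration_hz: float, base_hz: float = 220.0) -> float:
    # shift a low-frequency vibration into the audible range by a simple (assumed) scaling
    return base_hz + 10.0 * vibration_hz

print(to_visible_scale(78.0, lo=20.0, hi=120.0))   # e.g. bearing temperature in degC -> "yellow"
print(to_audible_pitch(3.5))                       # e.g. 3.5 Hz structural vibration -> 255 Hz tone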
Some assessment tools for testing an operator’s sensorial capabilities may include:
(a) Sensorial Abilities Tests (SATs) [30] capable of matching the sensorial abilities of
an operator with the sensorial demands of a job (or operation) prior to its allocation. This is not a trivial task, because even though the sensorial abilities of an operator
can be tested (such as by using simple vision and hearing tests), sensing successfully in
the situation (i.e. registering/perceiving signals necessary for analysis and orientation)
is also dependent on the nature and level of prior experience of the operator as previously
explained.
It is therefore expected that the solution to selective perception deficits is not simply providing operators with ‘bionic ears and eyes’ (even though in some situations that may be sufficient), but lies in using the ‘emergent agent’ model, where the machine agent has its own intelligence in terms of analysis and orientation, and the ability to reason about the human agent’s needs and to decide what data to present to the human and when.
The traditional limitation for decision-making has been scarcity of information,
requiring human (and machine) agents to make decisions in light of insufficient data
about the operations. With the proliferation of sensor devices (the so-called ‘Internet of
Things’) this situation could change, but only if sensor agents are made intelligent
in terms of what data to register and transmit to other agents.
New algorithms are needed for cooperative and collaborative learning of situations
for collective sense-making and decision-making by sensor agents (including agent
networks). This is so that the situational knowledge base of participating agents can
be utilised to adaptively filter unwanted data and to ‘zoom-in’ to enhance faint but
relevant signals, as well as negotiate signal bandwidth for priority communication.
Part of this situation recognition may be implemented by machine learning techniques,
such as (b) Advanced Trained Classifiers (ATCs) [26], where part of an intelligent
sensor agent may use machine learning to support human-automation symbiosis and
to learn about the individual operator and that operator’s behaviour in action, to actively
determine when an operator actually requires assistance, and to subsequently prompt
the appropriate type and level of sensing (aided) capabilities to facilitate optimal
sensing performance by the operator.
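The adaptive filtering and ‘zoom-in’ behaviour of such an intelligent sensor agent can be sketched in Python as follows; the channel names, relevance scores and gain are hypothetical inputs that would in practice come from the learnt situational knowledge base or a trained classifier [26].

def adaptive_filter(readings: list, relevance: dict, gain: float = 2.0) -> dict:
    # suppress channels marked irrelevant and amplify faint but relevant signals
    filtered = {}
    for channel, value in readings:
        score = relevance.get(channel, 0.0)
        if score < 0.2:
            continue                        # filter out unwanted data
        if score > 0.8 and abs(value) < 0.1:
            value *= gain                   # 'zoom-in' on a faint but relevant signal
        filtered[channel] = value
    return filtered

print(adaptive_filter([("spindle_vibration", 0.05), ("cabin_noise", 0.6)],
                      {"spindle_vibration": 0.9, "cabin_noise": 0.1}))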
4.3 Automation Aiding for Enhanced Cognitive Capabilities
A cognitive capability is the operator’s capacity and ability to undertake the mental
tasks (e.g. perception, memory, reasoning, decision, motor response, etc.) needed for
the job under certain operational settings [31]. In the OODA model, these cognitive tasks correspond to ‘Orient’ and ‘Decide’, and together they encompass mental workload, decision-making, skilled performance, human-computer interaction, maintaining reliability in performance, and dealing with work stress, whether in training or on the job.
As the Factories of the Future become increasingly dynamic working environments
(cf. Industry 4.0) due to the upsurge in the need for flexibility and adaptability of
production systems, the upgraded shop-floors (cf. Factory 4.0) call for cognitive aids
that help the operator perform these mental tasks, such as those provided by augmented
reality (AR) technologies or ‘intelligent’ Human-Machine Interfaces (HMI) to support
the new/increased cognitive workload (e.g. diagnosis, situational awareness, decision-
making, planning, etc.) of the Operator 4.0. It can be expected that this aid would
increase human reliability in the job, considering both the operator’s well-being and
the production system’s performance.
Some assessment tools for testing an operator’s cognitive capabilities may include:
(a) Cognitive Abilities Tests (CATs) [32] capable of matching the cognitive abilities of
an operator with the mental demands and cognitive skills needed for performing a job
(or operation) prior to its allocation; and (b) Advanced Trained Classifiers (ATCs)
[26] based on various machine learning techniques, to measure (test) in real-time
the operator’s cognitive performance and dynamically identify when an assisted/
enhanced action is necessary, and do so in an unobtrusive manner, relying on cognitive
load measurements (cf. cognitive ergonomics [33]).
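As with the physical and sensing cases, a simple measure-based trigger can stand in for such a classifier; in the Python sketch below the cognitive-load indicators, threshold and the prompted aid are assumptions.

def cognitive_aid_needed(load_measures: dict, limit: float = 0.75) -> bool:
    # combine normalised cognitive-load indicators and compare against a limit
    combined = sum(load_measures.values()) / len(load_measures)
    return combined > limit

# example: high pupil dilation and frequent task switching trigger a cognitive aid
if cognitive_aid_needed({"pupil_dilation": 0.8, "task_switch_rate": 0.9}):
    print("activate decision-support overlay")   # e.g. an AR work instruction (assumed aid)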
5 Conclusions and Further Work
Industry 4.0 would be inconceivable without human beings. Hence, human-automation
symbiosis by means of H-CPS and AA aims to take into account established principles
of the design of operator-friendly working conditions [34] for aiding the workforce [35],
such as: (a) practicability, considering compliance with ‘anthropometric’ and physical,
sensorial and cognitive norms in the design of a work system; (b) safety, embedding security and safety measures in the design of work systems so as to avoid accidents; (c) freedom from impairment, by providing automation-aided means to compensate for various individual (human) limitations and thus maintain the physical, sensorial and cognitive performance quality of the job; and (d) individualisation and
personalisation of the working environment thanks to adaptive systems (cf. AA) that
support the operator as an individual and promote learning (e.g. by means of sharing
and trading of control strategy [14]).
The development of ‘human-automation symbiosis’ in work systems [6] [36] offers
advantages for the social sustainability of the manufacturing workforce in Industry
4.0, in terms of improving operational excellence, safety and health, satisfaction and
motivation, inclusiveness, and continuous learning. Hence, the purpose of H-CPS
and AA in this research is to support the Operator 4.0 to excel in the job by means of
automation-aided systems that aim to provide a sustainable relief of physical and
mental stress and contribute to the development of workforce creativity, innovation and
improvisational skills, without compromising production objectives.
Further work aims to explore ‘intelligent’ human-machine interfaces and interaction
technologies, and adaptive and human-in-the-loop (HITL) control systems to support
the development of ‘human-automation symbiosis’ work systems for the Operator 4.0
in the Factory of the Future.
References
1. Boston Consulting Group (BCG): Man and Machine in Industry 4.0 (2015)
2. European Factories of the Future Research Association (EFFRA) Roadmap 2020
3. Tzafestas, S.: Concerning Human-Automation Symbiosis in the Society and the Nature.
Int’l. J. of Factory Automation, Robotics and Soft Computing, 1(3):6-24 (2006)
4. Camarinha-Matos, L.M., Rabelo, R., Osório, L.: Balanced Automation. Computer-Assisted Management and Control of Manufacturing Systems, Springer-Verlag, pp. 376-413 (1996)
5. Kovács, I. and Brandão-Moniz, A.: Issues on the Anthropocentric Production Systems,
Balanced Automation Systems: Architectures and Design Methods, pp. 131-140 (1995)
6. Romero, D., Noran, O., Stahre, J., Bernus, P., Fast-Berglund, Å.: Towards a Human-Centred
Reference Architecture for Next Generation Balanced Automation Systems: Human-
Automation Symbiosis, Advances in Production Management Systems, 460:556-566 (2015)
7. Hancock, P.A., Chignell, M.H.: Adaptive Control in Human-Machine Systems. Human
Factors Psychology, pp. 305-345, North Holland: Elsevier Science Publishers (1987).
8. Hancock, P.A., Jagacinski, R.J., Parasuraman, R. et al.: Human-Automation Interaction
Research: Past, Present and Future. Ergonomics in Design, 21(2):9-14 (2013)
9. Sheridan, T., Parasuraman, R.: Human-Automation Interaction. Human Factors and
Ergonomics, 1(1):89-129 (2006)
10. Kidd, P.: Organisation, People and Technology in European Manufacturing, CEC, FAST,
Brussels, Belgium (1992)
11. Lehner, F.: Anthropocentric Production Systems: The European Response to Advanced
Manufacturing and Globalization, CEC, Brussels, Belgium (1992)
12. Kay, M.: Adaptive Automation Accelerates Process Development. Bioprocess International
4(4):70-78 (2006)
13. Calefato, C., Montanari, R., Tesauri, F.: The Adaptive Automation Design. Human
Computer Interaction: New Developments, InTech, pp. 141-154 (2008)
14. Inagaki, T.: Adaptive Automation: Sharing and Trading of Control. Chapter 8 - Handbook
of Cognitive Task Design, pp. 147-169 (2003)
15. Munir, S., Stankovic, J.A., Liang, C-J.M., Lin, S.: Cyber-Physical System Challenges for
Human-in-the-Loop Control. Feedback Computing (2013)
16. Kasabov, N.: Introduction: Hybrid Intelligent Adaptive Systems. Int’l. J. of Intelligent
Systems, 6:453-454 (1998)
17. Boyd, J.R.: The Essence of Winning and Losing. www.dnipogo.org (1996)
18. Osinga, F.P.B.: Science, Strategy and War: The Strategic Theory of John Boyd. Delft:
Eburon Academic Publishers (2005)
19. Wilson, G.F., Russell, C.A.: Performance Enhancement in a UAV Task Using Psychophysiologically Determined Adaptive Aiding. Human Factors, 49(6):1005-1019 (2007)
20. Endsley, M.R.: Towards a Theory of Situation Awareness in Dynamic Systems. Human
Factors, 37(1):32-64 (1995)
21. Picard, R.W.: Affective Computing, MIT Press, Cambridge, (1997)
22. Ricci, N., Fitzgerald, M., Ross, A.M., Rhodes, D.H.: Architecting Systems of Systems with
Ilities: An Overview of the SAI Method. Conf. on Systems Engineering Research (2014)
23. Capability (General) Business Dictionary. http://www.businessdictionary.com (2016)
24. Committee on Measuring Human Capabilities: Measuring Human Capabilities: An Agenda
for Basic Research on the Assessment of Individual and Group Performance Potential for
Military Accession (2015)
25. Campion, M. A.: Personnel Selection for Physically Demanding Jobs: Review and
Recommendations. Personnel Psychology, 36:527-550 (1983)
26. Woźniak, M., Graña, M., Corchado, E.: A Survey of Multiple Classifier Systems as Hybrid
Systems. Information Fusion, 16: 3-17 (2014)
27. Lee, J.D., Seppelt, B.D.: Human Factors and Ergonomics in Automation Design. In: Handbook of Human Factors and Ergonomics, Hoboken: Wiley, pp. 1615-1642 (2012)
28. Attwood, D., Deeb, J., Danz-Reece, M.: Personal Factors. In: Tooley, M. (Ed.): Design Engineering Manual, Chapter 6.1, pp. 234-247 (2010)
29. Simon, H.: Artificial Intelligence as a Framework for Understanding Intuition. J. of
Economic Psychology, 24:265-277 (2003)
30. Stone, H., Bleibaum, R., Thomas, H.A.: Sensory Evaluation Practices. 4th Ed. Amsterdam:
Elsevier (2012)
31. Carroll, J.B.: Human Cognitive Abilities. Cambridge: Cambridge University Press (1993)
32. Hutton, R.J.B., Militello, L.G.: Applied Cognitive Task Analysis (ACTA): A Practitioner’s
Toolkit for Understanding Cognitive Task Demands. Ergonomics 41(11):1618-1641 (1998)
33. Falzon, P., Gaines, B.R., Monk, A.F.: Cognitive Ergonomics: Understanding, Learning, and
Designing Human-Computer Interaction, Academic Press (1990)
34. Hacker, W.: Allgemeine Arbeitspsychologie. Psychische Regulation von Wissens, Denk
und körperlicher Arbeit. Bern: Verlag Hans Huber, 2nd Ed. (2005)
35. Bailey, R.W.: Human Performance Engineering. 2nd Ed., London: Int’l. Prentice-Hall
(1996)
36. Kaber, D.B., Riley, J.M., Tan, K., Endsley, M.R.: On the Design of Adaptive Automation
for Complex Systems. Int’l. J. of Cognitive Ergonomics, 5(1):37-57 (2001)