Situational Awareness and Problems of Its Formation in the
Tasks of UAV Behavior Control
Dmitry M. Igonin, Pavel A. Kolganov and Yury V. Tiumentsev *
Academic Editor: Seong-Ik Han
Received: 25 October 2021
Accepted: 2 December 2021
Published: 7 December 2021
Department of Flight Dynamics and Control, Moscow Aviation Institute (National Research University),
Volokolamskoe Highway 4, 125080 Moscow, Russia; dimatopius@gmail.com (D.M.I.);
kolganov.pa@gmail.com (P.A.K.)
*Correspondence: yutium@gmail.com
Abstract: Situational awareness formation is one of the most critical elements in solving the problem
of UAV behavior control. It aims to provide information support for UAV behavior control according
to its objectives and tasks to be completed. We consider the UAV to be a type of controlled dynamic
system. The article shows the place of UAVs in the hierarchy of dynamic systems. We introduce the
concepts of UAV behavior and activity and formulate requirements for algorithms for controlling
UAV behavior. We propose the concept of situational awareness as applied to the problem of behavior
control of highly autonomous UAVs (HA-UAVs) and analyze the levels and types of this situational
awareness. We show the specifics of situational awareness formation for UAVs and analyze its
differences from situational awareness for manned aviation and remotely piloted UAVs. We highlight and discuss in more detail two
crucial elements of situational awareness for HA-UAVs. The first of them is related to the analysis
and prediction of the behavior of objects in the vicinity of the HA-UAV. The general considerations
involved in solving this problem, including the problem of analyzing the group behavior of such
objects, are discussed. As an illustrative example, the solution to the problem of tracking an aircraft
maneuvering in the vicinity of a HA-UAV is given. The second element of situational awareness is
related to the processing of visual information, which is one of the primary sources of situational
awareness formation required for the operation of the HA-UAV control system. As an example here,
we consider solving the problem of semantic segmentation of images processed when selecting a
landing site for the HA-UAV in unfamiliar terrain. Both of these problems are solved using machine
learning methods and tools. In the field of situational awareness for HA-UAVs, there are several
problems that need to be solved. We formulate some of these problems and briefly describe them.
Keywords: UAV behavior control; situational awareness; behavior analysis for environmental objects;
semantic image segmentation; machine learning
1. Introduction
One of the challenges for modern information technologies is the problem of behavior
control for robotic UAVs. Behavior control systems should enable UAVs to solve compli-
cated mission tasks under uncertainty conditions [1–3], with minimal human involvement;
i.e., they should have a high level of autonomy.
However, at present, a considerable number of modern UAVs belong to the class of
remotely piloted aircraft; i.e., they do not have the required level of autonomy in flight. In
addition, tools used to provide navigation information that are external to the UAV, such
as satellite navigation tools, are widely used. This approach makes the UAV dependent on
external sources of information and control signals, which in some cases makes it difficult
or even impossible to perform the missions.
For this reason, an alternative approach to UAV behavior control is required, based on
the autonomous solving of the problem of making control actions adequate for the control
objectives and the current situation in which the UAV operates. Hence, it follows that a
critical part of the UAV behavior control problem is to obtain an evaluation of the current
situation, i.e., to get the information about it that is required to support decision-making
processes in the control system. This kind of information will be called situational awareness
herein. Situational awareness is one of the most important topics for both manned and
unmanned aviation, and for other types of controllable systems.
Studies related to the concept of situational awareness in its traditional interpretation
represent one of the essential branches of human factor engineering (ergonomics). The
purpose of such research is to provide information support for human operator activity
as an active element of some controlled system (pilot of an aircraft, driver of a car or
other land vehicle, ship driver, etc.). Such information support should be organized in
such a way as to ensure effective human operator activity, taking into account the require-
ments related to the safe operation of the respective control systems. Many significant
results have been obtained in this field, which has been actively developed in the last
30–35 years. These results refer to the general theoretical issues of situational awareness
formation [4–31] and the application of relevant ideas and methods for solving problems related to various classes of control systems that include humans in their control loops. Specific classes of such systems include, but are not limited to, aircraft [4,5,24,25,27–29,32–51], ships [14,23], and ground vehicles [10,11,13,21,22]. In all these cases, the problem of sit-
uational awareness is interpreted as the need to provide system control processes with
information that makes it possible to produce sufficiently effective decisions. In addition, it
is necessary to reduce the risk of making incorrect decisions, which is especially important
when it comes to the safe operation of the systems involved, e.g., aircraft.
As for aircraft, the vast majority of publications on situational awareness for them re-
late to manned aviation. Only a relatively small number of these publications are concerned
with unmanned aerial vehicles [34–56].
The key point in solving the problem of situational awareness for manned aviation is
that the human operator is an active element of the control system, perceiving the current
situation and forming control actions based on the results of this perception. It is assumed
that the human operator is able to perceive information in various forms. In particular,
visual data play an essential role—for example, what the pilot sees through the aircraft
canopy, and through the head-up display or with the help of other display devices. A
radical difference in the case of autonomous UAVs is that much deeper processing of
incoming information is required. Where it is sufficient to “show a picture” to a human operator, in an autonomous UAV the tasks of pattern recognition and scene analysis must be solved. At the same time, it is necessary to manage the attention of the perceptual
system according to the goals of the flight operation performed. Attention management
makes it possible to identify meaningful elements of situational awareness and eliminate
insignificant ones. It should be emphasized that this will be only the first stage of the
mentioned deep processing of incoming data. Next, it will be necessary to identify the
relationships between the scene objects, both static and dynamic, to estimate the evolution
of the observed situation, etc. For a highly autonomous UAV, all these actions have to be
performed exclusively by the onboard equipment of the UAV, with minimal or no human
involvement.
As noted above, relatively few works have been devoted to the problem of situational
awareness for UAVs compared to manned aviation. Moreover, among these publications,
the overwhelming majority are related to remotely piloted UAVs. This is a significant circumstance because, in principle, the situational awareness problem for such UAVs differs little from the same
problem for human-crewed aircraft. In both of these cases, there is a human operator in
the control loop of the aircraft. The only difference is that less information is available to
the UAV operator due to the limited bandwidth of the communication channel between
the operator and the UAV. However, just like an aircraft pilot, the UAV operator “sees a
picture”—only now it is not a view through the aircraft canopy, but images from one or
more video cameras with which the UAV is equipped.
There have been very few papers on situational awareness for highly autonomous UAVs,
as opposed to crewed aircraft and remotely piloted UAVs; see [34–36,39–43,47,49,51–53,56].
They focus mainly on the formation of individual elements that make up situational awareness.
One of the critical tasks that has to be solved when controlling the behavior of an au-
tonomous UAV is the formation of situational awareness of the threats to it. Such problems
are solved in [34,36]. In particular, the article [34] considers the flight of an autonomous UAV over hilly terrain. The data obtained from the radar are used to prevent collision with an obstacle. An appropriate element of situational awareness formed based on these data allows predicting changes in the terrain and performing an evasive maneuver to avoid the identified obstacle. In [36], a similar problem is solved for a broader class of obstacle objects using visual data and based on reasoning about the relationships between the objects of the observed scene. Tasks similar in nature are also considered in [39,40,42,43], where
possible means of forming situational awareness elements concerning various objects on
the Earth’s surface are shown. Several studies devoted to forming situational awareness
for autonomous UAVs demonstrate the high efficiency of machine learning technologies for
solving this problem. In particular, the article [35] considers a quadcopter designed to per-
form search and rescue operations. Among the tasks solved by its onboard equipment is the
formation of situational awareness, based on visual data processing to assess the surround-
ing environment. In addition, the building of the situational awareness element, which
evaluates the UAV’s state based on the data from different sensors (multi-sensor fusion), is
considered. The capabilities of deep learning technologies and reinforcement learning to
solve the problems arising in situational awareness formation for an autonomous UAV are
shown. A similar problem is addressed in [49], where the main goal is to detect and locate
people and recognize their actions in near real-time. That method could play a crucial
role in preparing an effective response during rescue operations. Another task related
to rescue operations is discussed in [52]. Although not directly related to UAVs, it pro-
vides an example of how situational awareness in a complex emergency can be generated
using convolutional neural networks of several types. This approach can be very useful
for autonomous UAVs involved in search and rescue operations as well. Deep learning
technologies are a powerful tool for forming situational awareness elements, primarily
based on the processing of visual data from video surveillance systems. An example of the
use of this technology is contained in [53]. Therein, an autonomous UAV is used to address
the detection, localization, and classification of elements of the ground environment.
In addition to controlling the behavior of individual autonomous UAVs, the task of
their group behavior is also essential. Accordingly, the problem of forming situational
awareness that provides a solution to this problem also arises. One possible option is
considered in [47], which describes a method for autonomous planning of UAV swarm
missions. This planning essentially depends on the current situation in which the mission
is carried out. As the situation changes, e.g., as the target moves, the planning for swarm
behavior is updated. One valuable and effective tool for solving UAV swarm behavior
control tasks is multi-agent technology. The article [56] gives an example of situational
awareness generation for multi-agent systems. Although this example is not directly related
to UAVs, the proposed approach can help solve group behavior problems of autonomous
UAVs.
The problem of formalizing the concept of situational awareness is considered in [41].
Those authors proposed a statistical model for quantifying situational awareness as applied
to the operation of small civilian autonomous UAVs. Our approach is based on other
assumptions. Namely, we interpret autonomous UAVs as a class of controlled dynamic
systems. It seems exciting and promising to synthesize these two approaches to obtain
a formal model of situational awareness suitable for solving a wide range of problems
related to the behavior control of autonomous UAVs.
The results mentioned above are associated with forming individual elements of
situational awareness concerning autonomous UAVs. They do not allow one to consider the
problem as a whole, although there are separate works that consider situational awareness
for autonomous UAVs from a more general [41,56] perspective. At the same time, in
our opinion, the problem of such complexity as the formation of situational awareness
for “smart” UAVs cannot be solved without considering this problem as a whole from
as general a position as possible. The results of such considerations should provide a
framework for analyzing the problem of situational awareness as a whole and identifying
the required elements of situational awareness and their interaction.
In this article, we consider the problem of situational awareness for “smart” UAVs in
general and develop an approach to solving this problem.
When considering the current situation for the UAV, the problem of its estimation
contains two parts. The first of them is the evaluation of values for the variables describing
the state of the UAV (i.e., the values describing the trajectory and angular motion of the
UAV), and the values characterizing the “health state” of the UAV. The solution to this
problem is related to the use of methods from such scientific and engineering disciplines
as state estimation [57,58], identification [59–66], and diagnostics [67–71]. The second
part of the mentioned problem is to identify the so-called background-target environment
in which the UAV operates. In this case, under the background-target environment, we
understand the set of observed objects, backgrounds, disturbances, and propagation media
for signals (electromagnetic, sound, etc.).
The task of background-target environment analysis can be solved in both static and dynamic ways. In the static case, this problem can be reduced to detecting and recognizing objects in a specific area of space surrounding the UAV. In the dynamic variant, we are interested in the nature of these objects and thus in predicting their behavior, which is
required for more effective decision-making by the behavior control system of the UAV.
Both of these types of problems have to be solved, as a rule, in the presence of a variety of
disturbances of natural and/or artificial origins. The content of this article is mainly related
to the second part of the situation assessment problem mentioned above. The solution to
this problem should provide source data for the UAV’s behavior control system.
The purpose of our paper is to give a reasonably general formulation of the situational
awareness problem for highly autonomous UAVs, which could serve as a basis for struc-
turing this problem. We use this structuring to outline the directions of further research
required to solve the situational awareness problem concerning HA-UAVs. In doing so,
we interpret the UAV as a kind of controlled dynamic system. In Section 2, we present a
hierarchy of such systems, based on the concepts we introduced in the monograph [72],
and show the place of the HA-UAV in this hierarchy. Then we introduce the concepts of the behavior and activity of HA-UAVs as controlled dynamic systems, and the concept of controlling their behavior. As an extension of this topic, we formulate the requirements
for algorithms for controlling the behavior of HA-UAVs. One of the critical elements of
the behavior control problem is the information support of the corresponding formation
and decision-making processes, providing the achievement of control goals. This support
consists of assessing the current situation and forming the situational awareness required
for the HA-UAV behavior control algorithms. In this regard, in Section 3, we propose
formulations for the concepts of the situation and situational awareness that take into
account the specifics of the HA-UAV and analyze possible approaches to forming elements
of situational awareness. Then, in Section 4, we analyze the levels and types of situational
awareness concerning the specifics of the HA-UAV. In Section 5, we consider the problem of
forming elements of situational awareness for HA-UAVs. There we also discuss examples
of situational awareness elements relevant to solving the behavior control problem for
highly autonomous UAVs, along with approaches to the formation of these elements. There
are several problems to be solved in the area of situational awareness for HA-UAVs. In
Section 6, we discuss some of them.
2. UAV as a Controllable Dynamic System Operating under Uncertainty Conditions
Let us interpret the UAV as a controllable dynamic system S acting under uncertainty
conditions. Let us consider possible types of such systems and their relations to the problem
of controlling the UAV behavior.
2.1. Kinds of Controllable Dynamic Systems
The state of a dynamic system S changes over time under the influence of some external and/or internal factors [73–75]. The set of external factors characterizes the impact on the system S of the environment E in which the system operates. Internal factors characterize the system S in terms of its structure, characteristics, and behavioral features. Internal factors include, in particular, events affecting the properties of the system S, such as failures of its equipment, structural damage, and impacts on controls that affect the behavior of the system.
Obviously, in terms of their potential capabilities, different variants of dynamic sys-
tems will differ significantly from each other. In [72,76], a hierarchy of types of such systems
has been introduced. Its detailed description can be found in these books. This section will
briefly discuss the essence of this hierarchy and show the place of UAVs in this hierarchy.
As shown in [72,76], the system S in its general form can be defined as follows:

S = ⟨X^S, Φ^S, W^S⟩,
X^S = {X^S_i}, i = 1, ..., N_S;  Φ^S = {Φ^S_j}, j = 1, ..., N_R;
W^S = ⟨T^S, E^S⟩,  T^S ⊆ T.     (1)
In this definition, X^S = {X^S_i}, i = 1, ..., N_S, represents the set of structures of the system S. The structures X^S_i that are part of this set specify the values x_i describing the state of the system S, the regions of definition of these values X_i, and the relations between them. Here X^S ⊆ X is the area of the acceptable values of the system states (1), x_1 ∈ X_1, ..., x_n ∈ X_n, X = X_1 × ... × X_n. The element Φ^S = {Φ^S_j}, j = 1, ..., N_R, of the definition (1) is the set of rules (transformations, algorithms, procedures) that specify the evolution of the state of the system S. In addition, the definition (1) includes the element W^S = ⟨T^S, E^S⟩. It can be interpreted as the clock of the system S. In the definition of W^S there are two sets of moments of time, namely, system time T^S (the set of moments of time of system operation) and world time T as the set of all possible moments of time. This definition also contains an element E^S representing the mechanism of activity of the system S, i.e., a kind of clock generator.
The system S does not exist in isolation, but in interaction with the environment E, i.e., S ↔ E. This means that the environment E influences the system S, which responds to these influences and, in turn, also influences the environment. Depending on the properties of the system S and the nature of its interactions with the environment E, the following hierarchy of types of dynamic systems, ordered by increasing capabilities and, accordingly, complexity, can be constructed:

S = DS ⊂ VS ⊂ CS ⊂ AS ⊂ IS.     (2)
The interaction of S with the environment E can be either passive or active. Examples of passive interaction S ↔ E are given by the motion of an artillery shell or an unguided missile
under the influence of gravity and aerodynamic forces. Systems of this kind are unguided.
In the hierarchy (2), they include systems of the DS and VS classes.
Systems of the DS class (deterministic system) react to the same influences in the same way; i.e., there is no influence of random factors on their behavior. The definition (1) for the case of DS takes the following form:

DS = ⟨X^DS, Φ^DS, T^DS⟩,
X^DS ⊆ X,  T^DS ⊆ T,  Φ^DS = Φ^DS(x, t),  x ∈ X^DS,  t ∈ T^DS.     (3)
As a rule, in many real-world problems, it is necessary to take into account various
kinds of uncertainties. In particular, in a more realistic formulation, the above-mentioned
problems of modeling the flight of an artillery shell or an unguided missile would be
more correctly solved by taking into account the random influences of the air environment
(atmospheric turbulence, wind gusts, etc.). The VS (vague system) class of systems, which in general terms can be defined as follows, corresponds to this case:

VS = ⟨X^VS, T^VS, Φ^VS⟩,
X^VS ⊆ X,  T^VS ⊆ T,  Φ^VS = Φ^VS(x, ξ, t),
x ∈ X^VS,  ξ ∈ Ξ,  t ∈ T^VS ⊆ T.     (4)
In addition to the elements that were already in DS, the definition (4) also includes a set of uncertainty factors ξ ∈ Ξ. These factors are uncontrollable external influences on the system VS. For these impacts, there is no complete a priori information about their nature. The components of the vector ξ may, in particular, be random or fuzzy values.
The next three classes of systems, i.e., CS, AS, and IS, differ from DS and VS by active interaction with the environment E because they are controlled. The presence of tools for
controlling the system’s behavior allows it to actively respond to the environment’s influ-
ences. This response is carried out through the formation and implementation of actions
on the system’s controls, for example, the deflection of the aircraft’s control surfaces.
The nature of the interaction CS ↔ E depends on the properties of the system S and on the properties of the environment E. In addition, this interaction also depends on the resources S has at its disposal. These resources can be internal or external with respect to the
considered system. Examples of internal resources include a digital terrain map, inertial
navigation, computer vision, sensors for various purposes, etc. Similarly, external resources
can be GLONASS and GPS navigation systems, radio navigation facilities, remote control
facilities, etc.
The level of dependence of the system S on external resources is an essential indicator of its technical perfection. Next, the problem of creating highly autonomous systems will be considered using the example of “smart” UAVs, since it generates the need to form the situational awareness analyzed in our article.
As noted above, the active response to the influence of the environment E is implemented by systems of classes CS, AS, and IS. The first of them, the system CS (controllable system), can counteract external perturbations from E within some limits. This system can be described as follows:

CS = ⟨X^CS, T^CS, Φ^CS⟩,
X^CS ⊆ X,  T^CS ⊆ T,  Φ^CS = Φ^CS(x, u, ξ, t),
x ∈ X^CS,  u ∈ U^CS,  ξ ∈ Ξ,  t ∈ T^CS ⊆ T.     (5)
The definition (5) differs from (4) in that the rule Φ^CS now depends not only on the state x ∈ X^CS, the uncontrolled influence ξ ∈ Ξ, and time t ∈ T^CS, but also on the control actions u ∈ U^CS.
Thus, the system of class CS is a dynamic controllable system, reacting (by forming control actions) to the influences of the environment E according to the control objectives, which guided the designer of the system. In a more general case, a system of class CS can be influenced by various kinds of uncertain factors ξ ∈ Ξ. The system of this class
is characterized by the fact that, unlike the systems of the next two hierarchical levels
(adaptive and intelligent systems), the control objective is external to the system and is
taken into account only at the stage of synthesis of the control law for it.
Any control process is purposeful. In this case, a system of the CS class is characterized by the following circumstance. The control goals guided the designer of the control law for CS, and these goals are external with respect to the system; i.e., they are not included in its resources. The implication of this is that although the system CS, compared
to the systems of classes DS and VS, can counteract perturbing influences actively, these capabilities are limited. This situation is explained by the fact that Φ^CS, i.e., the control law of the system CS, cannot be corrected, even if it were necessary. Such a necessity may be due to a change, for some reason, in the properties of the controlled object, i.e., a change in the type of structure X^CS. The control law Φ^CS synthesized for CS with respect to the original variant of X^CS may become inadequate for the modified variant.
With minor differences between the original and modified versions of X^CS, i.e., with a low level of uncertainty ξ ∈ Ξ, the robustness of the control law Φ^CS may be sufficient to control CS with the required quality. If these differences are significant, a correction for Φ^CS is required to adjust to the changed version of CS, and the needed corrections must be made promptly, directly in the process of system operation. The AS (adaptive system) systems have this property. In general, they can be represented as follows:

AS = ⟨X^AS, T^AS, Φ^AS, Ψ^AS, Γ^AS⟩,
X^AS ⊆ X,  T^AS ⊆ T,  Φ^AS = Φ^AS(x, u, ξ, t),  Ψ^AS = Ψ^AS(γ, t),
Γ^AS ⊆ Γ,  x ∈ X^AS,  u ∈ U^AS,  ξ ∈ Ξ,  γ ∈ Γ,  t ∈ T^AS ⊆ T.     (6)
The definition (6) differs from (5) in that it introduces the rule Ψ^AS, which is an adaptation mechanism that provides adjustment of the control law Φ^AS. In addition, (6) also contains an element Γ^AS, which is a set of goals for the system AS and guides the adaptation mechanism Ψ^AS.
Thus, the inclusion of the elements Ψ^AS and Γ^AS in the system AS provides it with the ability to promptly modify its behavior to counteract changes in the properties of both the system S and the environment E. However, the possibility of such modification is limited by the fact that the adaptation mechanism Ψ^AS is guided in its actions by a predefined and fixed set of goals Γ^AS. It follows from this fact that the adaptation mechanism will successfully perform its tasks as long as the goals Γ^AS remain unchanged. However, when the system S operates in situations with high levels of heterogeneous uncertainties, it may be necessary to involve a higher level of adaptation, i.e., adaptation of goals [72,76].
However, the AS class systems do not have the resources to adjust existing goals and generate new ones; i.e., they lack a goal-setting mechanism. If we add a goal generation mechanism to AS, then we move to the systems of the IS class (intelligent system), which in general terms can be represented as follows:

IS = ⟨X^IS, T^IS, Φ^IS, Ψ^IS, Γ^IS, Ω^IS, Σ^IS⟩,
X^IS ⊆ X,  T^IS ⊆ T,  Φ^IS = Φ^IS(x, u, ξ, t),  Ψ^IS = Ψ^IS(γ, t),
Γ^IS ⊆ Γ,  Ω^IS = Ω^IS(x, u, ξ, γ, t),
x ∈ X^IS,  u ∈ U^IS,  ξ ∈ Ξ,  γ ∈ Γ,  Σ^IS ⊆ Σ,  t ∈ T^IS ⊆ T.     (7)
In the definition (7), compared to (6), an element Ω^IS was added, which represents the rule of generation of goals γ ∈ Γ, or more precisely, ways to adjust existing goals and generate new ones. In addition, another structure Σ^IS is introduced, which provides the rule Ω^IS with information to implement the goal-setting process.
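To make the hierarchy (2) and the definitions (3)–(7) more tangible, the following minimal sketch (our illustration only; the class and method names are assumptions, not notation from [72,76]) expresses each class as adding exactly the elements that its definition adds to the previous one.

```python
# A minimal sketch (illustration only) of the hierarchy (2): DS ⊂ VS ⊂ CS ⊂ AS ⊂ IS.
# Each subclass adds exactly the elements that the corresponding definition (3)-(7) adds.
from typing import Any, Callable, List, Sequence

State = Sequence[float]          # x ∈ X
Control = Sequence[float]        # u ∈ U
Disturbance = Sequence[float]    # ξ ∈ Ξ
Goal = Any                       # γ ∈ Γ (kept symbolic here)


class DS:
    """Deterministic system (3): evolution rule Φ(x, t) only."""
    def __init__(self, phi: Callable[[State, float], State]):
        self.phi = phi

    def step(self, x: State, t: float) -> State:
        return self.phi(x, t)


class VS(DS):
    """Vague system (4): Φ also depends on uncertainty factors ξ ∈ Ξ."""
    def __init__(self, phi: Callable[[State, Disturbance, float], State]):
        self.phi = phi

    def step(self, x: State, xi: Disturbance, t: float) -> State:
        return self.phi(x, xi, t)


class CS(VS):
    """Controllable system (5): Φ additionally depends on controls u ∈ U."""
    def __init__(self, phi: Callable[[State, Control, Disturbance, float], State]):
        self.phi = phi

    def step(self, x: State, u: Control, xi: Disturbance, t: float) -> State:
        return self.phi(x, u, xi, t)


class AS(CS):
    """Adaptive system (6): adds an adaptation mechanism Ψ and a fixed goal set Γ."""
    def __init__(self, phi, psi: Callable[[Goal, float], None], goals: List[Goal]):
        super().__init__(phi)
        self.psi, self.goals = psi, goals

    def adapt(self, t: float) -> None:
        for gamma in self.goals:          # Ψ adjusts Φ, guided by the fixed goals Γ
            self.psi(gamma, t)


class IS(AS):
    """Intelligent system (7): adds a goal-generation rule Ω fed by a structure Σ."""
    def __init__(self, phi, psi, goals, omega: Callable[..., List[Goal]], sigma: Any):
        super().__init__(phi, psi, goals)
        self.omega, self.sigma = omega, sigma

    def regenerate_goals(self, x: State, u: Control, xi: Disturbance, t: float) -> None:
        self.goals = self.omega(x, u, xi, self.goals, t)   # goal setting via Ω and Σ
```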
2.2. System Behavior and Activity
According to the definition (1), the current state of the system S is described by the set of values x_i ∈ X_i that characterize it in the problem being solved:

x = (x_1, x_2, ..., x_n),  x_i ∈ X_i,  i = 1, 2, ..., n;     (8)

at the same time, x ∈ X, X ⊆ X_1 × X_2 × ... × X_n.
Which values x_i will belong to the set (8) depends on the nature of the system in consideration and the task for which the system is intended.
For the case of continuous time t ∈ T = [t_0, t_f] and a finite-dimensional state vector x ∈ X ⊆ R^n, the set of states at all moments can be described as follows:

x(t) = (x_1(t), x_2(t), ..., x_n(t)) = [x_1(t) x_2(t) ... x_n(t)]^T.     (9)
For the case of discrete time t ∈ {t_i}, i = 1, ..., N, instead of continuous trajectories (9) in the state space, we should consider their discretized versions of the following form:

x(t_i) = {(x_1(t_i), x_2(t_i), ..., x_n(t_i))},  i = 0, 1, ..., N.     (10)
For (10), the behavior of the system S will be the sequence of its states x(t_i) ∈ X, which are tied to the corresponding time moments t_i ∈ T; i.e.,

{⟨x(t_i), t_i⟩},  t_i ∈ [t_0, t_f] ⊆ T,  i = 0, 1, ..., N.     (11)
Let us also introduce the concept of activity of the system S, which is a sequence of actions taken by this system. Each of the actions is based on information about the current situation, i.e., about situational awareness, and about the current goals of the system S operation. The implemented action leads to some results. Symbolically, each step of the process called activity can be represented as follows:

⟨situation, goal⟩ ⇒ action ⇒ result.     (12)
The concept of a situation is naturally divided into two components. The first of them will be called the internal situation and denoted as Λ_int(RW, t). Its components characterize the current state of the system S, while the components of the external situation Λ_ext(RW, t) are related to the fragment of objective reality RW from which the system S in question is excluded:

λ(t) = ⟨λ_ext(t), λ_int(t)⟩;  λ(t) ∈ Λ,  λ_ext(t) ∈ Λ_ext,  λ_int(t) ∈ Λ_int;  Λ = Λ_int ∪ Λ_ext.     (13)
λ(t), λ_ext(t), and λ_int(t) are the components of the total Λ, external Λ_ext, and internal Λ_int situation, respectively.
Using (13), the relationship (12) can be rewritten as
{⟨λ(t_i), γ(t_i)⟩}  --Φ^S-->  λ(t_{i+1}),  i = 0, 1, 2, ..., N,     (14)

or, in a different way,

λ(t_{i+1}) = Φ^S(⟨λ(t_i), γ(t_i)⟩),  i = 0, 1, 2, ..., N.
Here λ(t_i) ∈ Λ is the current situation, γ(t_i) ∈ Γ is the current goal, and Φ^S is the law of evolution of the system S. As is easy to see, the “result” in (12), attached to time marks, is nothing but the behavior of the system; that is, the behavior of the system S is the consequence of its activity. For the subject of this article, it would be more accurate to say “controlling the activity and behavior of the system S.” Still, we use the shorter version “behavior control” everywhere. Such usage is justified by the fact that we are interested not so much in the activity of the system S as in the result to which this activity leads, i.e., the behavior of the system S.
As noted above, along with the concept of situation, the idea of situational awareness plays an important role. If the concept of situation characterizes a fragment RW of objective reality (system + environment), the concept of situational awareness determines the degree of awareness of the system S about this reality, i.e., about the current (instantaneous) situation. It shows the composition of the data available in the system S for making control decisions. Situational awareness regarding a part of the situation components
is usually incomplete (their values are known inaccurately) or zero (their values are unknown). For one part of the components it is provided by direct observation (measurement), and for another part algorithmically, i.e., by calculating their values using the known values of other elements.
Thus, when we speak about a system S with uncertainties, we are talking about incomplete situational awareness for it. This statement means that the values of some internal and external components of the situation for S required to control its behavior are
unknown or are not known precisely.
In a more general case, the action implemented by the system S may depend not only on the current values of the situation λ(t_i) and the goal γ(t_i). The next step (12), which provides the transition to the situation λ(t_{i+1}), may take into account, besides the current situation λ(t_i) and the current goal γ(t_i), past values (prehistory) and possible future values (forecast) for both situations and goals. Taking this fact into account, the relation (14) can be represented as

⟨Λ(t_i), Γ(t_i)⟩  --Φ^S-->  λ(t_{i+1}),  i = 0, 1, 2, ..., N;
Λ(t_i) ⊂ Λ^S,  Γ(t_i) ⊂ Γ^S,     (15)

where Λ(t_i) is the set of situations and Γ(t_i) is the set of goals considered at a given time t_i.
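The following schematic sketch (illustrative only; the function and type names are our own) shows how iterating the activity step (14) produces the behavior (11):

```python
# A schematic (not from the paper) rendering of the activity step (14):
# the evolution law Φ maps the current situation λ(t_i) and goal γ(t_i) into the
# next situation λ(t_{i+1}); iterating it yields the behavior (11).
from typing import Any, Callable, List, Tuple

Situation = Any   # λ ∈ Λ: external + internal components, see (13)
Goal = Any        # γ ∈ Γ

def run_activity(phi: Callable[[Situation, Goal], Situation],
                 goal_policy: Callable[[Situation, float], Goal],
                 lam0: Situation,
                 times: List[float]) -> List[Tuple[Situation, float]]:
    """Return the behavior {⟨λ(t_i), t_i⟩} produced by repeatedly applying Φ."""
    behavior = [(lam0, times[0])]
    lam = lam0
    for t_next in times[1:]:
        gamma = goal_policy(lam, t_next)   # current goal γ(t_i)
        lam = phi(lam, gamma)              # one activity step: ⟨λ, γ⟩ ⇒ action ⇒ result
        behavior.append((lam, t_next))
    return behavior
```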
2.3. Requirements for UAV Behavior Control Algorithms
The Φ^S transformation, which is part of the relationships (14) and (15), is a set of
algorithms that control the behavior of the UAV. This set, in various combinations, may
include algorithms that solve the following problems:
• planning a flight operation, controlling its execution, promptly adjusting the plan when the situation changes;
• control of UAV motion, including its trajectory motion (including guidance and navigation) and angular motion;
• control of target tasks (surveillance and reconnaissance equipment operation control, weapons application control, assembly operations control, etc.);
• control of interaction with other aircraft, both unmanned and manned, when performing the task of a group of aircraft;
• control of structural dynamics (for “smart” structures);
• monitoring the “health state” of the UAV and implementation of actions to restore it if necessary (monitoring the state of the UAV structure and its onboard systems, coordination of the UAV onboard systems, reconfiguration of algorithms of the behavior control system in case of equipment failures and damage to the UAV structure).
The particular set of algorithms implemented by a specific UAV behavior control
system depends on the tasks for which the UAV is designed.
An advanced “smart” UAV, whose behavior control system implements the above
algorithms, will be:
• able to assess the current situation based on a multifaceted perception of the external and internal environment, and to form a forecast of the situation’s evolution;
• able to achieve the established goals in a highly dynamic environment with a significant number of heterogeneous uncertainties in it, taking into account possible counteractions;
• able to adjust the established goals, and to form new goals and sets of goals, based on the values and normative regulations (motivation) incorporated into the UAV behavior control system;
• able to acquire new knowledge, accumulate experience in solving various tasks, learn from this experience, and modify its behavior based on the obtained knowledge and accumulated experience;
• able to adapt to the type of tasks that need to be solved, including learning how to solve problems that were not in the original design of the system;
• able to form teams intended for the interaction of their members in solving some common problem.
As noted earlier, the use of UAVs will be truly efficient only if they are able to solve
their target tasks as autonomously as possible. This statement means that the human role
in controlling the behavior of the UAV must be minimized. The aim should be to leave only
such functions as formulating tasks to be carried out in the flight operation; controlling
the execution of tasks carried out in the flight operation; adjusting the goals of the flight
operation directly in the flight if the need arises.
The algorithms for controlling the behavior of the UAV when forming and implement-
ing control actions are based on the use of data about the goal of the flight operation, the
current situation, and the available resources. To support the above tasks, these algorithms must be based on such critical mechanisms as adaptability, i.e., the ability to adjust to the changing situation, and learnability, understood as the ability to extract experience and knowledge from the activities performed and to use them in the future.
In the definitions (5), (6), and (7), the UAV behavior control algorithms correspond to the Φ^CS, Φ^AS, and Φ^IS elements. They must provide the solution of the two most important tasks:
• providing situational awareness with an assessment of the current and/or predicted situation to obtain the information required for behavior control actions on the UAV;
• generating control actions that determine the behavior of the UAV.
In this regard, it is reasonable to specify explicitly the components of the elements Φ^CS, Φ^AS, and Φ^IS:
Φ^CS = Φ^CS_SA ∪ Φ^CS_CF,  Φ^AS = Φ^AS_SA ∪ Φ^AS_CF,  Φ^IS = Φ^IS_SA ∪ Φ^IS_CF,     (16)

where Φ^CS_SA, Φ^AS_SA, and Φ^IS_SA are sets of algorithms forming situational awareness, and Φ^CS_CF, Φ^AS_CF, and Φ^IS_CF are sets of algorithms forming control actions on the UAV.
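As a purely illustrative sketch of the decomposition (16) (the names below are our own, not established notation), the two algorithm families can be seen as two stages composed into one behavior-control step:

```python
# Illustrative only: the split (16) of the behavior control law into
# Φ_SA (situational awareness formation) and Φ_CF (control formation).
from typing import Any, Callable

RawObservations = Any       # sensor data forming the scene S_RW(Λ)
SituationalAwareness = Any  # Λ_A(S_RW, Γ)
Control = Any               # u ∈ U

def make_control_step(phi_sa: Callable[[RawObservations], SituationalAwareness],
                      phi_cf: Callable[[SituationalAwareness], Control]
                      ) -> Callable[[RawObservations], Control]:
    """Compose the two algorithm families of (16) into one behavior-control step."""
    def step(observations: RawObservations) -> Control:
        awareness = phi_sa(observations)   # Φ_SA: assess current/predicted situation
        return phi_cf(awareness)           # Φ_CF: generate control actions for the UAV
    return step
```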
Our article is mainly devoted to the formation of the approach to obtaining the elements Φ^CS_SA, Φ^AS_SA, and Φ^IS_SA. No less important, of course, is the problem of forming the elements Φ^CS_CF, Φ^AS_CF, and Φ^IS_CF that provide the generation of control actions for the UAV. In
our opinion, the most promising approach for solving this problem is the approach based
on machine learning methods, in particular, reinforcement learning methods and deep
learning methods [77–80]. This problem, however, requires separate consideration.
Besides the elements Φ^AS and Φ^IS, the definitions (6) and (7) also include the Ψ^AS, Γ^AS, Ψ^IS, Γ^IS, Ω^IS, and Σ^IS elements. They provide adaptability and learnability of the Φ^AS and Φ^IS elements, and goal adjustment and generation in the IS system. The creation of all
these elements is a highly complex problem, which also requires separate consideration.
In addition to the above elements and the problems associated with their formation, we
should mention another problem related to the behavior control of the CS, AS, and IS class
systems, including UAVs. Although it plays a supporting role in terms of controlling the
behavior of the system in question, this problem is no less important than those mentioned
above. We are talking about the problem of forming models of UAV behavior and models of the environment in which the UAV operates, in order to support solving the problems of making control decisions and the problems of perceiving and assessing the situation. We considered this problem and possible approaches to its solution in [72,76].
3. The Situation and Situational Awareness in Behavior Control Problem for UAV
As noted above, the essential element in ensuring the “smart” behavior of a UAV is
forming situational awareness for its control system. Let us clarify the interpretation of the
situation and situational awareness in terms of the behavior control problem for UAVs. Let
us first introduce the necessary definitions and then reveal the meaning of the concepts
contained in these definitions.
Definition 1. The situation Λ(RW, t) is the state at time t of a fragment RW of objective reality (Real World) in which the UAV is performing a flight operation.

Definition 2. The scene S_RW(Λ) is the result of a representation of the situation Λ(RW, t) in some perceiving medium.

Definition 3. Situational awareness Λ_A(S_RW, Γ) is the information extracted from the scene S_RW(Λ) according to the set of control goals Γ.
Let us now explain the concepts used in these definitions.
Definition 1 is based on the concept of objective reality and its fragment RW, and on the notion of the state at time t of the fragment, Λ(RW, t). This state will be further referred to as the situation at time t or the current situation (see also (14) and (15)).
Objective reality is a set of air and/or water environments, phenomena and objects in
them; and land and/or water surfaces and objects on them. The UAV under consideration
is one of the elements of objective reality and its fragment RW.
Here, phenomena in the air environment (atmospheric phenomena) are visible man-
ifestations of physical and chemical processes occurring in the Earth’s air shell, i.e., in
its atmosphere. It is commonplace to distinguish between several types of atmospheric
phenomena [81–84]. The first type includes water droplets or ice particles floating in
the air (clouds, fogs); precipitation falling out of the atmosphere (rain, snow, hail, etc.);
precipitation formed on the ground and objects (dew, frost, ice, etc.); precipitation lifted
by a wind from the Earth’s surface (blizzard, etc.). The second includes the transfer of
dust (sand) by the wind from the Earth’s surface (dust or sandstorms) or solid particles
suspended in the atmosphere (dust, smoke, cinder, etc.). Thirdly are phenomena related
to convective transport (ascending and descending air movements), namely, hurricanes,
tornadoes, etc. Fourthly, there are various kinds of electrical phenomena related to light
and sound manifestations of atmospheric electricity (thunderstorms, dusk, ball lightning,
polar lights, etc.). Finally, the fifth type includes optical phenomena caused by refraction
and diffraction of light in the atmosphere (rainbows, mirage, dawn, etc.).
Objects in the air environment are, first of all, flying vehicles of various kinds and
birds. In addition, other types of objects, such as a variety of objects carried by air currents,
may also be in the air environment.
The phenomena in the aquatic environment [84] include such things as waves; storms;
typhoons; tsunamis; floods; and ice formation, melting, movement, and compression.
Objects in the aquatic environment are underwater vehicles of various kinds, inhabitants
of the depths, icebergs, etc.
Objects on the Earth and/or water's surface [85–89] are any objects of natural and/or
artificial origin.
The above elements of objective reality, if they are part of the fragment RW, generate the external components of the situation λ_ext(t) ∈ Λ_ext.
The internal components of the situation λ_int(t) ∈ Λ_int are generated by the behavior
of the UAV as a controlled object and as an object of monitoring of the UAV “health
state” [67–69]. These components are discussed in more detail below.
A fragment RW of objective reality is a particular part of the real world to be observed according to the current needs and capabilities within the context of the UAV behavior control problem to be solved. Here, the needs are defined by the goals Γ of the flight operation and the tasks to be completed based on those goals. Capabilities are determined by the observational tools with which the UAV perceives the current situation, both its external Λ_ext and internal Λ_int components. At the same time, some of these tools may be external with respect to the UAV. Some of the possible types of these facilities are discussed below. In this case, the UAV, for which we solve the problem of controlling its behavior, is one of the elements of the fragment of objective reality RW, along with other objects in this fragment.
In other words, by a fragment RW of objective reality, we mean the following. Let a
UAV be located at some point of space, for which the problem of controlling its behavior
is considered. This space is filled with some objects, the presence and behavior of which
should be taken into account in the generation of control actions for the UAV. Which
objects should be taken into account depends on the control objectives, on the observation
capabilities of the UAV (range of the radar, number of simultaneously tracked objects,
etc.), and on the ability of the UAV to obtain information from external sources (e.g.,
signals from GPS/GLONASS navigation systems, and information on the UAV’s position
in space using motion capture technology). In addition to objects, the characteristics of
the environment in which they operate should also be considered. This fact is important
because, for example, atmospheric phenomena (clouds, fog, precipitation, etc.) can limit
the capabilities of the observation equipment used to assess the current situation. Thus, we
interpret a fragment RW of objective reality as a set of such interrelated elements as objects “inhabiting” the airspace and the underlying surface (land, surface, underwater). Besides, these elements also include the environment in which the objects of RW are “immersed”
(atmosphere, gravitational field, clouds, fumes, etc.).
The following examples of fragments of objective reality can be pointed out:
• UAV flying in airspace over the Earth’s surface;
• UAV flying in airspace over the ground as part of a team on a prescribed flight operation;
• UAV flying in airspace over the water surface;
• UAV flying in urban areas.
The state of a fragment RW of objective reality is a specific set of elements of the real world
that are a part of this fragment at the considered moment in time. Namely, the external
elements of this set will include only the air environment in the task of controlling the
behavior of the aircraft, only the aqueous environment for the two-medium vehicle in the
underwater section of its motion, and the boundary of the air and aqueous environment for
the seaplane when it performs takeoff and landing operations. Besides, this set of elements
includes those phenomena and objects in the atmosphere and/or water environment and
objects on the land and water surface, which take place in the situation under consideration.
In addition, the essential element of the RW fragment is, as already noted, the UAV, for
which the behavior control problem is solved.
Definition 2 shows the transition from a fragment of objective reality RW to the scene S_RW(Λ), i.e., to the reflection of this fragment by perceiving means located both onboard the UAV and possibly external to the UAV (for example, they may be part of the UAV flight control ground station means). The scene S_RW(Λ) in the situational awareness problem
has a dynamic nature; i.e., it generally changes over time.
The scene S_RW(Λ) is the result of the representation of the situation Λ(RW, t) in some perceiving medium. Since the considered UAV is one of the elements of the fragment RW of objective reality, and Λ = Λ_int ∪ Λ_ext, the following relation takes place:

S_RW(Λ) = S_RW(Λ_int) ∪ S_RW(Λ_ext);     (17)
that is, the scene S_RW(Λ) is a union of two sets of elements, S_RW(Λ_int) and S_RW(Λ_ext), corresponding to the internal and external components of the situation, respectively.
Representation of objective reality in some perceptual environment, i.e., the transition from the situation to the internal S_RW(Λ_int) and external S_RW(Λ_ext) components of the scene S_RW(Λ), is done by using appropriate technical equipment.
Which devices should be used to obtain S_RW(Λ_int) and S_RW(Λ_ext), and in what form they should reflect objective reality, depends on the specific tasks to be performed.
The variety of tools that can be used to form S_RW(Λ_int) and S_RW(Λ_ext) is rather large [90–92].
An incomplete list of such tools for S_RW(Λ_ext) includes:
• tools for working with data in the visual and infrared range (video cameras, OELS, IR cameras);
• acoustic data handling equipment (helicopter-lowered hydroacoustic station (HSS) + hydroacoustic buoys (HBS));
• tools to work with radar data (radar stations of various kinds);
• data sources external to the UAV (GPS/GLONASS, radio navigation tools, motion capture tools).
Similarly, the list of tools for generating S_RW(Λ_int) can include facilities such as:
• tools for sensing UAV state variables (gyroscopes, accelerometers, ROVs, angle-of-attack and angle-of-slip sensors, etc.);
• devices and sensors that characterize the state of the atmosphere (air density, turbulence, wind, etc.), for example, meteoradar;
• inertial navigation instruments;
• sensors providing control of the “health state” of the UAV systems and design.
Due to various limitations, none of the sources of information about environmental objects provides a complete information picture. For example, the visual channel, one of the most important for UAVs, cannot cope with such disturbances as cloud cover, fog, and nighttime. These phenomena are not a problem for the radar channel, but the radar channel does not allow one to perceive the observed object in great detail, which is quite usual for the visual channel. Hence, to obtain situational awareness acceptable for control purposes, it is necessary to get data from several sources and solve the problem of merging the information from them.
Full formation of the components S_RW(Λ_int) and S_RW(Λ_ext) of the scene S_RW(Λ) re-
quires numerous and heterogeneous perceptual devices. This fact causes the problem of
sensor fusion, that is, the problem of obtaining a consolidated information picture based
on data from a set of tools perceiving the components of the situation.
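As a minimal illustration of such merging (our own sketch, not a method proposed in the cited works), two independent estimates of the same quantity, e.g., the range to an object reported by the radar channel and by the visual channel, can be fused by inverse-variance weighting:

```python
# Minimal sensor-fusion sketch (illustration only): combine two independent,
# unbiased, noisy estimates of the same scalar quantity by inverse-variance weighting.
from typing import Tuple

def fuse_two_estimates(z1: float, var1: float, z2: float, var2: float) -> Tuple[float, float]:
    """Return the fused estimate and its variance for measurements z1, z2
    with error variances var1, var2 (assumed independent)."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

# Example (made-up numbers): radar gives 1520 m (sigma = 30 m), the visual channel 1480 m (sigma = 10 m).
range_est, range_var = fuse_two_estimates(1520.0, 30.0**2, 1480.0, 10.0**2)
```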
Finally, the last, third definition among those formulated above introduces the concept of situational awareness Λ_A(S_RW, Γ) as information extracted from the scene S_RW(Λ) according to the set of control goals Γ. This information serves as input to the UAV behavior
control system.
In all cases where situational awareness generation is concerned, an assessment of
the current situation is required, which can be accomplished either by processing the data
from the relevant sensor tools or by calculation based on the first kind of data. In addition,
in some tasks, the prehistory of the situation may also be required.
Let us take the traditional representation of the controlled motion model in the state
space as an example:
ẋ = f(x, u),
y = g(x, u),
u = h(y),     (18)
where x = (x_1, ..., x_n) is the vector of states, u = (u_1, ..., u_m) is the vector of controls, and y = (y_1, ..., y_r) is the vector of observations. If the system (18) belongs to the class CS, then its element g(x, u) is a standard observer [93]. As a rule, (18) must take into account the constraints on the values of the controlled object states, i.e.,

x_i ∈ X_i,  i = 1, 2, ..., n;  x ∈ X,  X ⊆ X_1 × X_2 × ... × X_n.     (19)
A particularly important case of the constraints (19) is their dynamic version, in which the
values of constraints depend on the current state of the UAV and on time. An example
of such constraints could be a dynamically formed boundary of the region of allowed
states, formed when the UAV moves in a restricted environment (urban area, a narrow and
winding gorge in mountainous terrain, etc.).
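A minimal illustration of such a dynamic constraint (our own example with made-up numbers) is a time-varying admissible altitude corridor that the behavior control system can query before committing to a maneuver:

```python
# Illustrative dynamic constraint (ours): the allowed altitude corridor narrows
# over time, so the admissible region depends on both the state and the time.
def in_allowed_altitude_corridor(altitude: float, t: float) -> bool:
    """Return True if the altitude satisfies the time-varying corridor constraint."""
    corridor_half_width = max(20.0, 100.0 - 2.0 * t)   # example: corridor shrinks with time, m
    corridor_center = 150.0                            # example nominal altitude, m
    return abs(altitude - corridor_center) <= corridor_half_width
```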
For the case of the AS and IS class systems, we can say that the formation of situational awareness is assigned to a version of the observer that is significantly extended compared to (18). Its functions are expanded because it is not only the values of state variables that need to be
evaluated. These values are only a part of the internal components of the situation (there are also components of a “diagnostic” nature, and there may also be components characterizing the operation of the aircraft structure, required for active control of the structure). However, there
are also external components of the situation, representing the state of the environment in
which the considered UAV operates.
Consequently, the advanced observer works not only with data from traditional
sensors of various kinds (gyroscopes, accelerometers, speed and position sensors of the
aircraft, etc.) but also with data coming from such devices as video cameras, radars,
infrared cameras, range finders, ultrasonic meters, etc.
Here you can see similarities with traditional feedback control methods. In these meth-
ods, the current situation is a state vector for the controlled object. Situational awareness
is formed with the help of an observer (a complex of measuring and computing means),
and observability of state variables is usually incomplete. Using only the estimation of the
current situation to generate control signals is P-control, which is known to be ineffective.
Improvement of control quality can be achieved by considering the rate of change of state
variables when generating control signals by introducing derivatives for state variables into
the control law. In this case, we move from P-control to PD-control. Taking the prehistory into account corresponds to introducing an integral component into the control law, i.e., the transition to PID control.
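To make this analogy concrete, a minimal discrete-time PID sketch is given below (illustrative only, with made-up gains): the proportional term uses only the current estimate, the derivative term its rate of change, and the integral term its prehistory.

```python
# Minimal discrete-time PID sketch (illustration of the analogy above):
# P uses the current error only, D its rate of change, I its accumulated prehistory.
class PID:
    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0       # prehistory of the error
        self.prev_error = 0.0

    def update(self, setpoint: float, measurement: float) -> float:
        error = setpoint - measurement
        self.integral += error * self.dt                  # I: prehistory
        derivative = (error - self.prev_error) / self.dt  # D: rate of change
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: hold an altitude of 100 m, current estimate 95 m, 0.1 s time step.
controller = PID(kp=0.8, ki=0.1, kd=0.3, dt=0.1)
command = controller.update(setpoint=100.0, measurement=95.0)
```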
4. Levels and Kinds of Situation Awareness in Behavior Control Problem for UAV
Solving the task of forming situational awareness of the UAV behavior control system
assumes the ability to answer the following basic questions:
(1) What is the goal of the problem, i.e., what is to be obtained as a result of its solution (the specific composition of the situational awareness components)?
(2) Which source data are needed to solve the problem under consideration, and how can they be obtained?
(3) What should the results of the problem to be solved look like?
(4) Into which subtasks is the original problem broken down, and what problems arise in solving these subtasks and the problem as a whole?
Regarding the answer to the first of the above questions, the goal of the situational
awareness task is to provide information support for the decision-making process in the
problem of UAV behavior control. The list of required situational awareness components
is determined by the objectives of the flight operation being implemented, and by the
tasks to be completed during its implementation. As a rule, this list will include the
following elements:
• a list of objects detected in the scene S_RW(Λ) (the result of solving the problem of localization of objects in RW);
• classes of objects detected and localized in the scene S_RW(Λ) (the result of solving the problem of classifying objects in RW);
• a list and localization of objects of given (prescribed) classes in the scene S_RW(Λ);
• results of tracking the behavior of the prescribed objects in RW, and the prediction of this behavior.
In addition, the required situational awareness components may include other elements, in particular, relationships between objects in the scene S_RW(Λ) given the dynamics of its change, a list of objects in RW for which the fact of belonging to a grouping solving a common problem is established, results of tracking the behavior of a grouping of objects in RW and forecasting this behavior, results of analysis of the goals of individual objects and their groupings in RW, etc.
The answers to the second and third questions depend on the specific composition
of the situational awareness components required to implement the goals of the flight
operation executed by the UAV in question. Since computer vision plays a critical role in
the UAV behavior control tasks, given the dynamic nature of the situational awareness
formation process for the UAV control system, one of the primary sources of input data
will be video streams obtained from one or more video cameras. There can also be other
sources of data, in particular, as already mentioned above. For the external components
of the situation, it can be radars, infrared cameras, etc. For the internal components, we
can have various kinds of sensor equipment, allowing us to measure the values of UAV
state variables and the values characterizing the “health state” of systems and design of
the UAV.
The answer to the fourth question is an extensive (but not exhaustive) list of tasks
that are subtasks of the original situational awareness task. This list includes the following
elements:
(1) formation of the scene S_RW(Λ) based on one or more sources of input data (for example, video streams, data from infrared cameras and/or radar);
(2) identification and classification of objects in the considered fragment RW of objective reality: localization of objects in RW (object localization), semantic segmentation of the scene (semantic segmentation), identification of scene objects in S_RW(Λ) (object classification), and detection of spatial and other relationships between objects of the scene S_RW(Λ);
(3) tracking of objects (object tracking) in the considered fragment RW of objective reality, including objects of specified (prescribed) classes (detecting the position of objects detected in RW relative to the UAV and tracking the change of this position in time), and predicting the behavior of objects located in RW, all or selectively (trajectory prediction);
(4) revealing behavioral patterns of objects (pattern extraction) in a fragment RW of objective reality and, on this basis, revealing groupings of objects in RW, patterns in the behavior of the revealed groupings of objects, and predicting the behavior of these groupings;
(5) revealing behavior goals of objects (all or selectively) in the considered fragment RW of objective reality, and of groupings of objects in RW.
5. Forming Elements of Situational Awareness for UAVs
Generating situational awareness elements for the UAV control system is not an easy task. Recently, several attempts have been made to solve this problem with machine learning methods [31,49–53]. In our opinion, machine learning techniques, including deep learning methods, are very promising tools for solving UAV situational awareness problems. In this section, we consider two examples that illustrate the application of the proposed approach to the formation of situational awareness elements for UAVs based on machine learning methods.
5.1. The Components of Situational Awareness That Ensure the Tracking of Objects in a Fragment
of Objective Reality and Predicting Their Behavior
5.1.1. Reference Systems That Provide Object Tracking in a Fragment of Objective Reality
The task of tracking objects and predicting their behavior in the fragment of objective reality RW, which includes the considered UAV, is one of the essential components of the general problem of forming the situational awareness required to control the behavior of the UAV. This problem can be formulated and solved in several ways. First of all, these formulations differ from each other in how data about the motion (behavior) of the tracked object are obtained. The motion of an object from RW can be represented in different coordinate systems, in particular:
• in a coordinate system associated with some point on the Earth's surface;
• in a coordinate system associated with some moving object (aircraft, ship, ground moving object) outside the considered UAV;
• in a coordinate system associated with the considered UAV.
In other words, it is essential to answer the question: where is the observer whose data are used to track the motion of objects in RW?
• The observer is at some point immobile relative to the Earth's surface (for example, this could be the flight control point of a UAV).
• The observer is on a movable platform (e.g., a ship or some other movable carrier for the UAV flight control point).
• The observer is on a platform that vigorously changes its phase state, including both its trajectory and angular components (in particular, an aircraft acting as a leader in a constellation of aircraft).
• The observer is onboard the UAV itself, which can vigorously change the components of its trajectory and/or angular motion.
If the observer is outside the UAV for which the behavior control task is being solved, then either the measurement results must be transmitted over a radio channel onboard the UAV, where the tracking problem is then solved, or the tracking problem is solved by the external observer itself (for example, using motion capture technology), and the ready results of the tracking task are transmitted onboard the UAV. In the third case of the above list, the observer platform not only vigorously maneuvers, i.e., changes its trajectory motion, but also vigorously changes its angular orientation in space (such behavior is typical, for example, for multicopters).
An essential question in this case is the following: what set of quantities should describe the motion (behavior) of the observed object, and, as noted above, relative to what (i.e., in what coordinate system) should the motion of objects in RW be detected and tracked? In the problem of controlling UAV behavior, representing the motion of these objects relative to the UAV in question looks the most natural. In this case, we obtain, first of all, values of the quantities describing the motion of the center of mass of the observed object (coordinates of the center of mass; components of the velocity and acceleration vectors; possibly the load factor, as an indicator of how energetically the object maneuvers). In addition, it is also necessary to obtain components characterizing the angular motion of the observed object.
Thus, we are interested not only in how the center of mass of the observed object moves relative to the UAV in question, but also from what aspect we observe it from this UAV. This information can be important for the UAV behavior control system in terms of estimating the possible intentions of the "neighbors" of the UAV in RW.
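To make this discussion more concrete, the sketch below shows one possible way to re-express the position of an observed object, given in an earth-fixed coordinate system, in a coordinate system tied to the UAV. The frame convention, function names, and numerical values here are illustrative assumptions and do not reproduce any specific onboard implementation.

```python
import numpy as np

def earth_to_body_matrix(yaw, pitch, roll):
    """Rotation matrix from a local earth-fixed frame to the UAV body frame
    (Z-Y-X rotation sequence; angles in radians)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    # Passive rotations about z (yaw), y (pitch), x (roll), applied in that order.
    Rz = np.array([[cy, sy, 0.0], [-sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, -sp], [0.0, 1.0, 0.0], [sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, sr], [0.0, -sr, cr]])
    return Rx @ Ry @ Rz

def relative_position_in_body_frame(uav_pos, uav_euler, obj_pos):
    """Position of an observed object expressed in the UAV body frame."""
    R = earth_to_body_matrix(*uav_euler)
    return R @ (np.asarray(obj_pos) - np.asarray(uav_pos))

# Illustrative example: an object offset (0, 1000, 200) m from the UAV,
# UAV yawed by 90 degrees, zero pitch and roll.
rel = relative_position_in_body_frame(
    uav_pos=(0.0, 0.0, 5000.0),
    uav_euler=(np.pi / 2, 0.0, 0.0),   # yaw, pitch, roll
    obj_pos=(0.0, 1000.0, 5200.0))
print(rel)
```

The same rotation, applied to velocity differences, gives the relative velocity of the object in the body frame, which is one of the quantities listed above.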
5.1.2. An Example of Solving the Problem of Analyzing the Behavior of a Dynamic Object
in a Fragment of Objective Reality
When solving real-world unmanned vehicle control problems (especially in UAV con-
trol problems), the task of analyzing and predicting the behavior of various environmental
objects is essential.
As noted above, the behavior (11) of some observable object as a dynamic system S in a fragment RW of objective reality is a sequence of states x(ti) ∈ X of this object, tied to the corresponding time moments ti ∈ T, i.e.,

{⟨x(ti), ti⟩}, x(t) = (x1(t), x2(t), . . . , xn(t)), ti ∈ [t0, tf] ⊆ T, i = 0, 1, . . . , N. (20)

The components of situational awareness about the behavior of the considered object are the results of observing it. These results are obtained in one way or another for the model (18):

{⟨y(ti), ti⟩}, y(t) = (y1(t), y2(t), . . . , yr(t)), ti ∈ [t0, tf] ⊆ T, i = 0, 1, . . . , N. (21)
The task of analyzing and predicting the behavior of some object from RW is to reconstruct the functional dependence describing the obtained data (21) and then use it to predict the behavior p steps ahead.
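As an illustration of this formulation, the following sketch shows one possible way to turn the observation sequence (21) into training pairs for a p-step-ahead predictor. The window length, prediction horizon, and array shapes are illustrative assumptions rather than the settings of the experiment described below.

```python
import numpy as np

def make_supervised_pairs(y, window, p):
    """Slice an observation sequence y of shape (N, r), cf. Equation (21),
    into (history window, p-step-ahead target) pairs for model training."""
    inputs, targets = [], []
    for i in range(len(y) - window - p + 1):
        inputs.append(y[i:i + window])          # `window` consecutive observations
        targets.append(y[i + window + p - 1])   # observation p steps ahead
    return np.asarray(inputs), np.asarray(targets)

# Example: 1000 observations of a 3-component output vector,
# 20-step history window, prediction 5 steps ahead (illustrative values).
y = np.random.randn(1000, 3)
X, T = make_supervised_pairs(y, window=20, p=5)
print(X.shape, T.shape)   # (976, 20, 3) (976, 3)
```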
We considered a corresponding example of solving the problem of aircraft behavior analysis and prediction in [51]. This example uses, as the source dataset, synthetically generated trajectories obtained with the FlightGear Flight Simulator [94] for the Schleicher ASK 13 glider. Each entry in the dataset is a collection of state vector values for the given aircraft: flight altitude; geographic coordinates (latitude and longitude); and pitch, roll, and yaw angles.
An example of a part of a trajectory from the generated dataset is shown in Figure 1. The points on the trajectory are labels of geographic coordinates and altitude. For convenience of perception, the points are connected in chronological order. The darker points correspond to the beginning of the part of the trajectory and the lighter ones to its end.
Figure 1. An example of a part of the trajectory for the generated dataset.
The obtained values of geographic coordinates were converted into values in the Universal Transverse Mercator (UTM) projection. In addition, during data preparation the absolute coordinates were converted into displacement values ∆xk = xk − xk−1 at step k relative to step k − 1, for all k. The duration of each experiment was 1000 s, with intervals between measurements of 1 s. The initial flight altitude was assumed to be 5000 m, and the initial speed was 250 km/h.
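A minimal sketch of this preprocessing step is given below. It assumes the third-party utm package for the coordinate conversion, and the sample coordinate values are purely illustrative.

```python
import numpy as np
import utm  # third-party package for latitude/longitude <-> UTM conversion

def to_displacements(lat, lon, alt):
    """Convert a recorded trajectory (degrees, degrees, metres) to UTM
    coordinates and then to per-step displacements dx_k = x_k - x_{k-1}."""
    east, north = [], []
    for la, lo in zip(lat, lon):
        e, n, _, _ = utm.from_latlon(la, lo)
        east.append(e)
        north.append(n)
    xyz = np.column_stack([east, north, alt])
    return np.diff(xyz, axis=0)    # one row of (dx, dy, dalt) per 1-s step

# Example with three synthetic samples (illustrative values only).
d = to_displacements(lat=[55.000, 55.001, 55.002],
                     lon=[37.000, 37.000, 37.001],
                     alt=[5000.0, 4999.0, 4998.0])
print(d)
```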
The data acquired in this way were used as training, validation, and test data to obtain models for analyzing and predicting the motion of the aircraft in question.
To solve the described problem, a relatively simple variant of a recurrent neural network is used: a combination of LSTM blocks [95–98] with a fully connected layer [77].
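A minimal PyTorch sketch of such a network is shown below. The layer sizes, window length, and class names are illustrative assumptions and do not reproduce the exact configuration used in [51].

```python
import torch
import torch.nn as nn

class DisplacementPredictor(nn.Module):
    """LSTM blocks followed by a fully connected layer; one such model is
    trained independently for each displacement component (layer sizes are
    illustrative, not those of the original experiment)."""
    def __init__(self, n_features=1, hidden_size=64, num_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size,
                            num_layers=num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                 # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # prediction from the last time step

model = DisplacementPredictor()
window = torch.randn(8, 20, 1)             # batch of 8 windows of 20 steps
print(model(window).shape)                 # torch.Size([8, 1])
```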
The training of the constructed network was performed based on the x, y, and z displacements introduced above, independently for each target coordinate. We did not predict the angles responsible for the angular orientation of the aircraft.
The effectiveness of the solution to the considered problem can be evaluated from the results presented in Figure 2, which shows the model errors in predicting the variables describing the trajectory motion of the tracked aircraft.
Figure 2. Model errors when forecasting the coordinates x, y, alt in the universal Mercator projection. Here ∆x, ∆y, and ∆alt are displacements at step k relative to step k − 1 in longitudinal range, lateral range, and altitude, respectively.
A more detailed description of this experiment and its results are presented in our
article [51].
5.1.3. The Problem of Team Behavior in Object Tracking in a Fragment of Objective Reality
In the general case, the formation of situational awareness for the UAV behavior control system requires tracking not a single object in RW, but some set of objects. There can be many such objects in RW. Still, from the point of view of UAV behavior control purposes, usually only a part of them is of interest, which somewhat simplifies the object tracking problem.
At the same time, it is essential to be able to identify the collective behavior of objects. This requirement means that it should be possible to understand that what surrounds the UAV in RW is not some swarm of objects, each of which is "minding its own business," but a team of objects solving some task common to them.
Further, there is the task of revealing the goals of collective behavior; solving it makes it possible not only to track but also to predict the team behavior of objects. In this setting, the task of tracking individual objects (object tracking) is included as an important subtask in the more general problem of team tracking.
When analyzing and predicting the collective behavior of observed objects, we should keep in mind that this behavior can vary widely in its level of complexity: from very simple forms to extremely complex ones. Examples of the simplest collective behavior of observable objects are aircraft formation flight without vigorous maneuvering of the group, the motion of ground objects as part of a column, and the motion of surface ships as part of an order. As an example of complex team behavior, one can point to individual maneuvering of observed objects that are part of a group, where this maneuvering is nevertheless subordinated to a common goal or set of goals and can therefore be carried out within wide limits. The task of detecting and predicting group behavior in this variant becomes even more complicated if the objects included in the grouping are separated in space RW by a significant distance, as, for example, in the AWACS system (a radar aircraft plus the aircraft to which it transmits the data obtained about the detected targets).
As a set of objects solving some joint task, a formation can include both mobile and
stationary objects. An example of such a formation is any aerial target interception system,
including one or more fighter-interceptors, and ground-based guidance and flight operation
control facilities. Another example is remotely piloted UAVs. Here again, the formation
includes one or more UAVs and a ground control center.
Another critical task is to identify the coherence (pattern) of the individual behaviors
of the objects that make up a grouping united by a common goal or set of goals. The
analysis of such coherence can serve as a tool for revealing the purpose of group behavior.
An interesting case is when the preferences among the individual goals included in this set may differ for the objects included in the grouping. An example here is a flight operation performed by a pair of aircraft. In this case, the pair has a common goal (the goal of the flight operation). At the same time, the tasks solved by the leader and the wingman of the pair (and, consequently, their goals) are different. For example, the main task of the leader is to attack the enemy, while the main mission of the wingman is to cover the leader. However, under certain circumstances, the wingman may attack the target and the leader may cover it; i.e., the tasks of attacking and covering are assigned to both the leader and the wingman of the pair but have different priorities for them.
In a grouping, the individual goals, i.e., the goals of each member of the grouping, are
subordinated to a common (group) goal or a set of goals. A set of goals can be in vector
form (there are no connections between the goals) or structured (some relation is set on
the set of goals, for example, the goals can be arranged in a tree structure). One can also
talk about a set of goals that includes a common (group) goal together with the individual
goals of the objects included in the grouping (here an example is AWACS together with the
objects “tended” by this system).
To summarize the above, we can specify the following characteristic issues arising for different variants of the tasks of tracking objects in RW and predicting their behavior:
• What set of quantities is required to describe the behavior of tracked objects, based on the specifics of the particular UAV behavior control problem being solved?
• Are scene objects tracked by onboard means of the UAV, or by means external to the UAV, with the results transferred to the UAV?
• In relation to which coordinate system are the measurement data presented? (Internal, with the origin at the center of mass of the UAV; or external, with the origin lying outside the UAV, for example, tied to the ground command and measurement complex.)
• By which means are the internal components of the situation measured, in particular the components describing the trajectory and/or angular motion of the UAV: by measuring instruments onboard the UAV, or by external means (for example, by motion capture technology)?
The trajectory prediction problem in RW, as part of the situational awareness problem for UAVs, can be formulated in several ways, differing in the level of detail of the results obtained and, accordingly, the level of complexity.
The simpler case takes place when measurement results are available that show how the values of the state variables of the observed object change. In this case, the question arises as to how these measurements are made. One possible variant is the use of onboard means of the observed object (for example, a recorder of the values of its state variables included in its onboard equipment). The second option is that measurements are carried out by some external set of means that is stationary in the "start" coordinate system (for example, means included in the flight control point of a remotely piloted UAV).
In a somewhat more complicated version of the problem of predicting the behavior of objects in RW, measurements are carried out onboard some moving (possibly in all six degrees of freedom) observer object. In this case, the observer object can be either the UAV itself, for which the behavior control problem is solved, or some external source, for example, a radar observation aircraft (AWACS). If a video stream is used as the basis for the measuring complex, then the information source is frames from the video stream, possibly combined with a rangefinder.
It should be emphasized that, from the point of view of the needs of the UAV behavior control system, the results of the trajectory prediction problem should eventually be presented in a coordinate system moving relative to the start coordinate system, with its origin tied to the UAV in question. Thus, as an element of the situational awareness of the UAV behavior control system, we are interested, first of all, in the motion of the objects of the scene relative to the considered UAV, although in some cases it may be necessary to represent these results in some other coordinate system.
5.2. Semantic Image Segmentation as a Tool for Forming Situational Awareness Elements in UAV
Control Tasks
5.2.1. An Experiment on the Formation of Visual Components of Situational Awareness
As mentioned above, an essential part of UAV situational awareness is data about
objects in the space around the UAV. These objects can be flying vehicles or other things,
such as birds. In addition, some tasks performed by UAVs require information about
objects on the Earth’s surface. In particular, this information will be needed by the control
system of the UAV when performing a landing. This information will be critical when the
UAV is required to land on an unprepared site in an unfamiliar location. In this situation,
computer vision allows the control system to select a place that meets the conditions for a
safe landing.
The main element of the above-mentioned task of selecting a suitable landing site for the UAV is semantic segmentation of the scene observed by the video surveillance means. The results of such segmentation make it possible to identify areas of terrain suitable for landing, as well as objects, including moving ones, that can interfere with the landing. For dynamic scene objects, the problem of tracking them and predicting their motion, which was discussed in the previous section, also needs to be solved, so that the UAV behavior control system has the information needed to prevent potentially dangerous situations. We discuss this task further using the example of an urban scene, with the aim of enabling more accurate maneuvering and landing of UAVs in an urban environment.
5.2.2. Description of the Dataset Used in the Experiment
The Semantic Drone Dataset was developed at the Institute of Computer Graphics and Vision at Graz University of Technology (Austria) [99] to solve problems such as those described above. We used a modified version of this dataset, called the Aerial Semantic Segmentation Drone Dataset. It differs from the original dataset in that it has four additional classes (unlabeled, bald-tree, ar-marker, conflicting) [100].
This dataset focuses on understanding urban scenes and is designed to improve the flight and landing safety of autonomous UAVs. The original dataset consists of 400 publicly available and 200 private images of 6000 × 4000 pixels taken with a high-resolution (24-megapixel) camera. The imagery ranges in altitude from 5 to 30 m above the ground. Additionally, the dataset contains masks for semantic segmentation into 20 classes (tree, grass, other vegetation, dirt, gravel, rocks, water, paved area, swimming pool, person, dog, car, bicycle, roof, wall, fence, fence-pole, window, door, obstacle).
5.2.3. Preprocessing of the Data Used
Since only 400 images and semantic segmentation masks are available for public use,
we partitioned the dataset as follows:
• the training set contains 300 images;
• the validation set contains 50 images;
• the test set contains 50 images.
Before entering the neural network, the images were compressed to a size of 1056 × 704 pixels.
To extend the dataset in use, the albumentations library [101] was used, which provides additional examples by applying the following operations to images from the original dataset (a sketch of such an augmentation pipeline is given after the list):
• horizontal flip;
• vertical flip;
• grid distortion;
• random changes in brightness and contrast;
• adding Gaussian noise.
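The sketch below assembles the listed operations into an albumentations pipeline; the probabilities and the resize step are illustrative assumptions, not the exact parameters used in the experiment.

```python
import albumentations as A

# Augmentation pipeline matching the operations listed above
# (probabilities are illustrative values).
train_transform = A.Compose([
    A.Resize(height=704, width=1056),
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.GridDistortion(p=0.3),
    A.RandomBrightnessContrast(p=0.3),
    A.GaussNoise(p=0.2),
])

# Applied jointly to an image and its segmentation mask:
# augmented = train_transform(image=image, mask=mask)
# image_aug, mask_aug = augmented["image"], augmented["mask"]
```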
5.2.4. Description of Approaches to Solving the Semantic Segmentation Problem
Neural networks Unet [102], PSPNet [103], and DeepLabV3 [104] were used as the basic semantic segmentation models. Networks pre-trained on the ImageNet dataset [105], namely MobileNet V2 [106] and ResNet34 [107], were used to extract high-level features.
Training was performed for 50 epochs using the PyTorch library, with a batch size of 3. The cross-entropy criterion was used as the loss function. Optimization was performed with the AdamW method [108] (a version of the Adam optimizer with decoupled weight decay).
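One possible way to assemble this setup is sketched below. It assumes the segmentation_models_pytorch package for the model definitions; the learning rate is an illustrative assumption, and the class count (20 original plus 4 additional classes) follows the dataset description above.

```python
import torch
import segmentation_models_pytorch as smp

NUM_CLASSES = 24    # 20 original + 4 additional classes in the modified dataset

# Three baseline models with ImageNet-pretrained encoders (cf. Table 1).
models = {
    "unet":    smp.Unet("mobilenet_v2", encoder_weights="imagenet", classes=NUM_CLASSES),
    "pspnet":  smp.PSPNet("resnet34", encoder_weights="imagenet", classes=NUM_CLASSES),
    "deeplab": smp.DeepLabV3("resnet34", encoder_weights="imagenet", classes=NUM_CLASSES),
}

model = models["deeplab"]
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # lr is illustrative

# One training step (the loader yields images and integer class masks):
# for images, masks in train_loader:
#     optimizer.zero_grad()
#     loss = criterion(model(images), masks)
#     loss.backward()
#     optimizer.step()
```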
5.2.5. Results of the Semantic Segmentation Problem
The results of training the considered neural networks are shown in Table 1.
Table 1. Comparison of the training results of the models under consideration.

Segmentation Model   Backbone       Mean IoU   Pixel Accuracy   Prediction Time, s
Unet                 MobileNet v2   0.49       0.89             0.067
PSPNet               ResNet34       0.45       0.88             0.045
DeepLab V3           ResNet34       0.51       0.90             0.14
Exemplary evaluations of the quality of semantic segmentation for the pairs of networks Unet–MobileNetV2, PSPNet–ResNet34, and DeepLab–ResNet34 are shown in Figure 3.
Figure 3.
Examples of performance evaluation for pairs of networks Unet–MobileNetV2, PSPNet–
ResNet34, and DeepLab–ResNet34. In the upper part of the figure the original images are shown,
followed by the markup mask (the reference markup of the corresponding image). Then, the results
of the specified pairs of networks are shown.
We see that DeepLabV3, due to its more complex neural network architecture, shows the best generalization ability, while being the most computationally demanding of all the architectures considered. The PSPNet model shows slightly worse values of the quality metrics compared to DeepLabV3, while being the fastest of the architectures considered. The network based on the Unet architecture is, in this example, an intermediate case in terms of accuracy and speed.
5.2.6. Ways to Increase Data Processing Speed When Solving the Semantic
Segmentation Problem
To increase the performance of the neural network used, some combination of the following steps can be applied:
• use more advanced neural network architectures;
• perform low-level optimization of the computation graph for the particular computing architecture used onboard (this can be done both manually and with third-party tools);
• perform downscaling of the neural network weights, using simple quantization and quantization-aware training;
• apply low-rank factorization;
• optimize convolutional filters;
• build new, more efficient blocks;
• use neural network knowledge distillation to train a much simpler student model on a mixture of the responses of the larger teacher model and the original ground-truth labels;
• use automated neural architecture search (NAS).
To achieve high performance while maintaining accuracy, we will use the knowledge distillation approach [109–111] based on the outputs of the teacher model (response-based knowledge distillation). Figure 4 shows the teacher-student architecture used. The teacher model (marked in red) is pre-trained on the dataset being used.
The student model is a simpler neural network architecture, PSPNet, with the MobileNetV2 encoder. This choice is due to the computational efficiency of the MobileNetV2 network on low-power devices and its availability in standard libraries.
The loss function has two parts (a minimal sketch of the combined loss is given after Figure 4):
• Distillation loss. This part of the loss function is responsible for comparing the soft predictions of the teacher and student models. The better the student model reproduces the results of the teacher model, the lower the value of this part of the loss function.
• Student loss. This part of the loss function, as in ordinary training, compares the results of the student model (hard labels) with the markup (ground truth).
Figure 4. Response-based knowledge distillation.
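The sketch below shows one way the two parts of the loss in Figure 4 can be combined for response-based distillation. The temperature and weighting coefficient are illustrative hyperparameters, not values reported in this work.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, target, T=2.0, alpha=0.5):
    """Response-based knowledge distillation loss (a sketch; T and alpha are
    illustrative). student_logits, teacher_logits: (batch, classes, H, W);
    target: (batch, H, W) integer class mask."""
    # Distillation loss: student's softened predictions vs. teacher's soft labels.
    distill = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean") * (T * T)
    # Student loss: ordinary cross-entropy against the ground-truth mask.
    student = F.cross_entropy(student_logits, target)
    return alpha * distill + (1.0 - alpha) * student
```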
As the teacher model, we will use the DeepLabV3 neural network architecture with the SE-ResNet50 encoder network (https://arxiv.org/pdf/1709.01507.pdf (accessed on 26 November 2021)).
As the student model, we will use the PSPNet neural network with the MobileNetV2 encoder network.
The teacher network was trained on the Semantic Drone Dataset for seven days on an Nvidia Tesla V100 graphics card.
The student model was trained in a similar way, but for four days. After that, knowledge was distilled from the larger DeepLabV3 model into the simpler model based on the PSPNet architecture.
Comparison of models and results of distillation are shown in Table 2.
Table 2. Comparison results of teacher and student models. For the student model, the values in parentheses are those obtained after knowledge distillation.

Model Type   Model       Backbone      Model Size   Mean IoU        Pixel Accuracy   Prediction Time, s
teacher      DeepLabV3   SE-Resnet50   162 MB       0.644           0.93             0.2
student      PSPNet      MobileNetV2   9.4 MB       0.516 (0.612)   0.9 (0.91)       0.041
As a result, we have obtained a student model that is only slightly inferior in accuracy to the teacher model: the mean IoU is 0.644 for the teacher model vs. 0.612 for the student model after knowledge distillation, and the pixel accuracy is 0.93 vs. 0.91 for the teacher and student models, respectively. At the same time, the resulting student model takes up much less space (almost 17 times less) and runs nearly five times faster.
The result of the student’s model after knowledge distillation is shown in Figure 5.
We see that after knowledge distillation, the model can run on this computer almost in real time: 1 s / 0.041 s ≈ 24.4 FPS (frames per second) with a batch size of 1. Increasing the batch size will increase the FPS, but the latency will also increase, because the system will wait until the entire batch is processed. This latency may be critical when controlling the HA-UAV. The optimal value is selected based on the requirements for the specified system and the capabilities of the computer installed onboard the HA-UAV.
Additional speed increases can be achieved by applying various low-level optimizations (a brief sketch of two of them follows the list), such as:
• permanently reserving the required amount of memory on the GPU for incoming images, so that the resource-intensive operations of allocating and freeing memory are not required when a new batch arrives;
• transferring data preprocessing and postprocessing to the GPU;
• organizing efficient work with video streams by using specialized libraries;
• optimizing the model for a specific GPU;
• rewriting the program code of the model in C++, using high-performance libraries.
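As an illustration, the sketch below applies two of the listed optimizations, freezing the computation graph via TorchScript and switching inference to half precision on the GPU, to a hypothetical student_model (the distilled PSPNet-MobileNetV2 network). It is a sketch under these assumptions rather than the procedure actually used onboard.

```python
import torch

# `student_model` is assumed to be the trained, distilled segmentation network.
student_model.eval().cuda().half()
example_input = torch.randn(1, 3, 704, 1056, device="cuda", dtype=torch.half)

with torch.no_grad():
    # Freeze the model into a TorchScript graph for low-level graph optimization.
    traced = torch.jit.trace(student_model, example_input)
    traced = torch.jit.freeze(traced)
    prediction = traced(example_input)   # segmentation logits, (1, classes, 704, 1056)
```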
Thus, by performing some optimizations, it is possible to achieve high performance for
fairly resource-intensive semantic segmentation models even on relatively weak processors.
Figure 5.
An example of the quality of a student’s network based on the PSPNet architecture with the
MobileNetV2 encoder network after knowledge distillation. The first column of the figure shows the
original images. The next shows the markup masks (reference markup of the corresponding image).
The last column shows the results of the student’s network operation after knowledge distillation.
6. Areas of Further Research on Situational Awareness for Highly Autonomous UAVs
Analysis of the situational awareness problem for HA-UAVs shows that there are still a number of topics that need to be addressed.
The problem of situational awareness formation can be solved in different ways.
Which one to choose for a particular type of HA-UAV depends on many factors. Among
these factors are the goals of using the HA-UAV, the variety of tasks it solves, and the
limitations on available resources when forming elements of situational awareness. In
order to compare possible variants of situational awareness formation and then choose
the one that satisfies the presented conditions, it is necessary to quantify these variants. In
other words, it is essential to formulate a set of criteria (metrics) that allow evaluating the
accuracy and completeness of situational awareness. In addition, in terms of these criteria,
it is necessary to estimate the level of situational awareness required from the point of
view of HA-UAV behavior control objectives. The extension of this problem is to optimize
the level of situational awareness needed from the point of view of HA-UAV behavior
control objectives.
As always, the critical element in optimization tasks is the optimization criterion (or set of criteria). As applied to UAVs, the choice of the optimization criteria for the level of situational awareness is determined, first of all, by the goals of the flight operations carried out by UAVs. In addition, these criteria may be supplemented by additional conditions. One more variant of the optimization problem statement for situational awareness is the maximization of the level of situational awareness for a given amount of computing resources that can be spent on forming it. In particular, this problem can be posed as a search for the level of situational awareness minimally sufficient to satisfy the UAV behavior control objectives. Solving such a problem will make it possible to minimize the expenditure of resources on situational awareness formation.
As it is well known, all control processes are purposeful. Accordingly, the processes of
information support for UAV behavior control, i.e., the formation processes of situational
awareness, will also be purposeful. For this reason, it is necessary to investigate the
mechanisms of goal setting, i.e., the tools for adjusting the existing goals and forming new
goals that guide the operation of the UAV behavior control system. To reduce the amount
of computation required to create elements of situational awareness, attention management
and attention allocation mechanisms play essential roles according to the goals of the
flight operation implemented by the UAV. These mechanisms should make it possible
to reallocate available resources to provide, as fully as possible, information support for
critical actions in the current situation. This requirement means that we can consider the
mechanism of goal adjustment and goal-setting combined with the mechanism of attention
management as a tool for the dynamic formation of situational awareness elements.
When a flight operation is performed not by an individual UAV but by a group of
UAVs, there is a problem of situational awareness formation both for individual UAVs
and for groups of UAVs, solving a common task. At the same time, situational awareness
of individual UAVs included in the grouping should be formed based on the goals for
individual UAVs, which are subgoals of the general goal of the flight operation for the
grouping as a whole. For the grouping as a whole, situational awareness is formed based on
the goals of the flight operation performed by the grouping. The problem of sensor fusion,
i.e., the problem of forming elements of situational awareness and situational awareness
as a whole based on data coming from different sources, is of particular importance here.
For individual UAVs, it could be computer vision, radar, infrared cameras, etc. At the
UAV-group level, the sensor fusion problem is to combine the data coming from the UAVs
in the grouping into a single information picture to form the situational awareness of
the grouping.
An important task is to create representation formats for various situational awareness
elements, which would be the most effective in terms of information support for behavior
control algorithms of both individual UAVs and teams of UAVs solving a common task.
In our opinion, standardization of such formats is reasonable, as in this case, there is
a possibility to accumulate libraries of tools for the formation of situational awareness
elements. These libraries will significantly reduce the time and cost of creating appropriate
subsystems for the UAV behavior control system.
When a HA-UAV is used for observational tasks, there is a problem of forming situational awareness for "external" use, when a HA-UAV or a group of HA-UAVs perceives and evaluates the current situation not only for use by the HA-UAV behavior control algorithms but also for some consumer located outside the HA-UAV. Usually, in such tasks, "raw" data on the current situation, such as a video signal, are transmitted to the consumer. It would be more expedient to form, onboard the HA-UAV, situational awareness that meets the goals of the information consumer. This approach would considerably reduce the requirements on the bandwidth of the channel linking the HA-UAV with the consumer.
Another area of research in situational awareness for UAVs is related to the tools for implementing algorithms for the formation of situational awareness elements. As the experience already available in solving problems of a similar nature and level of complexity shows, machine learning provides the most effective tools at our disposal. We can confidently say that the methods and tools of machine learning represent the primary tool for forming the elements of situational awareness. In this field, technologies for creating situational awareness elements using various neural architectures, including both feedforward and recurrent networks, require consideration. Deep networks and deep learning have considerable potential. In our opinion, the use of reinforcement learning is also very promising. In this case, the corresponding neural architectures must be formed so as to best suit the solution of the situational awareness formation problem, taking into account the limitations on the characteristics of the computing facilities available onboard the HA-UAV. This requirement means that there is a need to develop information technologies that effectively implement situational awareness formation algorithms within the available computing resources.
7. Conclusions
In this article, we proposed and analyzed the concept of situational awareness as applied to the problem of behavior control for highly autonomous UAVs. We have structured situational awareness for this class of aircraft and analyzed its levels and types. We have demonstrated the specifics of situational awareness formation for HA-UAVs and analyzed its differences compared to situational awareness for manned aircraft and remotely piloted UAVs. We showed the importance of using machine learning methods and tools for forming elements of situational awareness in HA-UAV behavior control tasks. We have also considered ways of developing situational awareness research to take into account the specifics of HA-UAVs.
As examples, we highlighted and discussed the two essential elements of situational
awareness for HA-UAVs in more detail. The first one is related to the analysis and predic-
tion of the behavior of various objects in the environment in which the HA-UAV operates.
The solution to this problem allows us to track objects in the vicinity of the HA-UAV,
whose behavior should be taken into account by the control system of the HA-UAV. The
problem is solved by machine learning using a recurrent neural network, which contains a
combination of LSTM blocks with a fully connected layer.
The second problem is related to the formation of visual components of situational
awareness. The problem of semantic segmentation of scenes coming from video surveil-
lance tools is solved. The approach to the solution for this problem was illustrated by the
example of the identification of objects on the ground surface when selecting a landing site
for a HA-UAV in unfamiliar terrain.
The task of semantic segmentation, in this case, was solved using convolutional neural
networks. Architectures Unet, PSPNet, and DeepLabV3 were considered. A comparative
analysis of these architectures in terms of solving the problem of selecting a suitable landing
site in an urban environment was performed.
To improve the accuracy of the relatively lightweight semantic segmentation network
based on the PSPNet-MobileNetV2 architecture, the knowledge distillation approach was
applied based on the teacher’s network responses. Thanks to this approach, it was possible
to bring the student’s network model closer to the teacher’s model in terms of the accuracy
of the work.
The essential requirement for the tools that form the considered part of situational awareness is their cost-effectiveness; that is, they should solve the problem with the least possible consumption of computing resources. One possible approach to meeting this requirement is the use of neural architecture search (NAS). This approach is aimed at selecting the optimal neural network architecture for solving perception problems, and its development represents a direction for further research in the field under consideration.
Author Contributions:
Conceptualization, Y.V.T.; methodology, Y.V.T.; software, D.M.I. and P.A.K.;
validation, P.A.K., D.M.I. and Y.V.T.; formal analysis, P.A.K. and Y.V.T.; investigation, Y.V.T. and P.A.K.;
resources, P.A.K.; data curation, P.A.K.; writing—original draft preparation, P.A.K.; writing—review
and editing, Y.V.T.; visualization, P.A.K.; supervision, Y.V.T.; project administration, Y.V.T.; funding
acquisition, Y.V.T. All authors have read and agreed to the published version of the manuscript.
Funding:
The publication has been prepared within the framework of the Program of foundation and
development of world-class research center “Supersonic” for 2020–2025 with the financial support
of the Ministry of Education and Science of Russian Federation (Agreement of 16 November 2020,
number 075-15-2020-924).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Conflicts of Interest: The authors declare no conflict of interest.
References
1.
Finn, A.; Scheding, S. Developments and Challenges for Autonomous Unmanned Vehicles; Springer: Berlin/Heidelberg, Germany, 2010.
2.
Valavanis, K.P. (Ed.) Advances in Unmanned Aerial Vehicles: State of the Art and the Road to Autonomy; Springer: Berlin/Heidelberg,
Germany, 2007.
3.
Martynyuk, A.A.; Martynyuk-Chernienko, Y.A. Uncertain Dynamic Systems: Stability and Motion Control; CRC Press: London,
UK, 2012.
4.
Endsley, M.R.; Garland, D.J. (Eds.) Situation Awareness Analysis and Measurement; Lawrence Erlbaum Associates Inc.: Mahwah, NJ,
USA, 2000.
5.
Endsley, M.R.; Jones, D.G. Designing of Situation Awareness: An Approach to User-Centered Design, 2nd ed.; CRC Press: London,
UK, 2004.
6. Endsley, M.R. Toward a theory of situation awareness in dynamic systems. Hum. Factors 1995,37, 32–64. [CrossRef]
7. Endsley, M.R. Situation awareness misconceptions and misunderstandings. J. Cogn. Eng. Decis. Mak. 2015,9, 4–32. [CrossRef]
8. Endsley, M.R. Situation awareness: Operationally necessary and scientifically grounded. Cogn. Technol. Work. 2015,17, 163–167.
[CrossRef]
9. Endsley, M.R. From here to autonomy: Lessons learned from human-automation research. Hum. Factors 2017, 59, 5–27. [CrossRef]
10. Endsley, M.R. The limits of highly autonomous vehicles: An uncertain future. Ergonomics 2019,62, 496–499. [CrossRef]
11.
Endsley, M.R. Situation awareness in future autonomous vehicles: Beware of the unexpected. In Proceedings of the 20th Congress
of the International Ergonomics Association (IEA 2018), Florence, Italy, 26–30 August 2018; Advances in Intelligent Systems and
Computing Book Series; AISC: Chicago, IL, USA, 2019; Volume 824, pp. 303–309.
12. Endsley, M.R. The divergence of objective and subjective situation awareness: A meta-analysis. J. Cogn. Eng. Decis. Mak. 2020, 14, 34–53. [CrossRef]
13. Walker, G.H.; Stanton, N.A.; Salmon, P.M. Vehicle Feedback and Driver Situation Awareness; CRC Press: London, UK, 2018.
14.
Stanton, N.A.; Salmon, P.M.; Walker, G.H.; Houghton, R.J.; Baber, C.; McMaster, R.; Salmon, P.; Hoyle, G.; Walker, G.; Young, M.S.;
et al. Distributed situation awareness in dynamic systems: Theoretical development and application of an ergonomics methodol-
ogy. Ergonomics 2006,49, 1288–1311. [CrossRef]
15.
Salmon, P.; Stanton, N.; Walker, G.; Green, D. Situation awareness measurement: A review of applicability for C4i environments.
Appl. Ergon. 2006,37, 225–238. [CrossRef]
16.
Salmon, P.M.; Stanton, N.A.; Walker, G.H.; Baber, C.; Jenkins, D.P.; McMaster, R.; Young, M.S. What really is going on? Review of
situation awareness models for individuals and teams. Theor. Issues Ergon. Sci. 2008,9, 297–323. [CrossRef]
17.
Salmon, P.M.; Stanton, N.A.; Walker, G.H.; Jenking, D.P. Distributed Situation Awareness: Theory, Measurement and Application to
Teamwork; MPG Books Group: Farnham, UK, 2009.
18.
Salmon, P.M.; Stanton, N.A.; Walker, G.H.; Jenkins, D.; Ladva, D.; Rafferty, L.; Young, M. Measuring situation awareness in
complex systems: Comparison of measures study. Intern. J. Ind. Ergon. 2009,39, 490–500. [CrossRef]
19. Stanton, N.A.; Salmon, P.M.; Walker, G.H.; Jenkins, D.P. Is situation awareness all in the mind? Theor. Issues Ergon. Sci. 2010, 11, 29–40. [CrossRef]
20.
Stanton, N.A.; Salmon, P.M.; Walker, G.H.; Salas, E.; Hancock, P.A. State-of-science: Situation awareness in individuals, teams and
systems. Ergonomics 2017,60, 449–466. [CrossRef]
21.
Salmon, P.M.; Read, G.J.M.; Walker, G.H.; Lenne, M.G.; Stanton, N.A. Distributed Situation Awareness in Road Transport; CRC Press:
London, UK, 2019.
22. Hancock, P.A. Some pitfalls in the promises of automated and autonomous vehicles. Ergonomics 2019,62, 479–495. [CrossRef]
23. Van der Laar, P.; Tretmans, J.; Borth, M. (Eds.) Situation Awareness with Systems of Systems; Springer: New York, NY, USA, 2013.
24. Gawron, V.J. Human Performance and Situation Awareness Measures, 3rd ed.; CRC Press: London, UK, 2019.
25. Wise, J.A.; Hopkin, V.D.; Garland, D.J. (Eds.) Handbook of Aviation Human Factors, 2nd ed.; CRC Press: London, UK, 2010.
26. Angelov, P. (Ed.) Sense and Avoid in UAS: Research and Applications; John Wiley & Sons: Chichester, UK, 2012.
27. Sarter, N.B.; Woods, D.D. Situation awareness: A critical but ill-defined phenomenon. Int. J. Aviat. Psychol. 1991, 1, 45–57. [CrossRef]
28.
Ackerman, K.A.; Talleur, D.A.; Carbonari, R.S.; Xargay, E.; Seefeldt, B.D.; Kirlik, A.; Hovakimyan, N.; Trujillo, A.C. Automation
situation awareness display for a flight envelope protection system. J. Guid. Control. Dyn. 2017,40, 964–980. [CrossRef]
29.
Nguyen, T.; Lim, C.P.; Nguyen, N.D.; Gordon-Brown, L.; Nahavandi, S. A review of situation awareness assessment approaches
in aviation environment. IEEE Syst. J. 2019,13, 3590–3603. [CrossRef]
30.
Wei, B.; Nener, B.D. Multi-sensor space debris tracking for situational awareness with labelled random finite sets. IEEE Access
2019,7, 36991–37003. [CrossRef]
31.
Amos, B.; Dinh, L.; Cabi, S.; Rothorl, T.; Colmenarejo, S.G.; Muldal, A.; Erez, T.; Tassa, Y.; de Freitas, N.; Denil, M. Learning
awareness models. arXiv 2018, arXiv:1804.06318.
32.
Endsley, M.R.; Rodgers, M.D. Distribution of attention, situation awareness and workload in a passive air traffic control task:
Implications for operational errors and automation. Air Traffic Control. Q. 1998,6, 21–44. [CrossRef]
33. Jones, D.G.; Endsley, M.R. Use of real-time probes for measuring situation awareness. Int. J. Aviat. Psychol. 2004, 14, 343–367. [CrossRef]
34.
Kim, Y.-G.; Chang, W.; Kim, K.; Oh, T. Development of an situation awareness software for autonomous unmanned aerial vehicles.
J. Aerosp. Syst. Eng. 2021,15, 36–44.
35. Sampedro, C.; Rodriguez-Ramos, A.; Bavle, H.; Carrio, A.; de la Puente, P.; Campoy, P. A fully-autonomous aerial robot for search and rescue applications in indoor environments using learning-based techniques. J. Intell. Robot. Syst. 2019, 95, 601–627. [CrossRef]
36.
Jeon, M.-J.; Park, H.-K; Jagvaral, B.; Yoon, H.-S.; Kim, Y.-G.; Park, Y.-T. Relationship between UAVs and ambient objects with
threat situational awareness through grid map-based ontology reasoning. Int. J. Comput. Appl. 2019,41, 1–16. [CrossRef]
37.
Ruano, S.; Cuevas, C.; Gallego, G.; Garcia, N. Augmented reality tool for the situational awareness improvement of UAV
operators. Sensors 2017,17, 1–16. [CrossRef]
38.
Cuevas, H.M.; Aguiar, M. Assessing situation awareness in unmanned aircraft systems operations. Intern. J. Aviat. Aeronaut.
Aerosp. 2017,4. [CrossRef]
39.
McAree, O.; Chen, W.-H. Artificial situation awareness for increased autonomy of unmanned aerial systems in the terminal area.
J. Intell. Robot. Syst. 2013,70, 545–555. [CrossRef]
40. McAree, O.; Aitken, J.; Veres, S. Towards artificial situation awareness by autonomous vehicles. IFAC Pap. 2017, 50, 7038–7043. [CrossRef]
41. McAree, O.; Aitken, J.; Veres, S. Quantifying situation awareness for small unmanned aircraft. Aeronaut. J. 2018, 122, 733–746. [CrossRef]
42.
Liu, C.; Coombes, M.; Li, B.; Chen, W.-H. Enhanced situation awareness for unmanned aerial vehicle operating in terminal areas
with circuit flight rules. J. Aerosp. Eng. 2016,230, 1683–1693. [CrossRef]
43.
Cavaliere, D.; Loia, V.; Saggese, A.; Senatore, S.; Vento, M. Semantically enhanced UAVs to increase the aerial scene understanding.
IEEE Trans. Syst. Man Cybern. 2019,10, 555–567. [CrossRef]
44. Bocaniala, C.D.; Sastry, V.V.S.S. On enhanced situational awareness models for unmanned aerial systems. In Proceedings of the
2010 IEEE Aerospace Conference, Big Sky, MT, USA, 6–13 March 2010; pp. 1–14.
45.
Freedman, S.T.; Adams, J.A. The inherent components of unmanned vehicle situation awareness. In Proceedings of the 2007 IEEE
International Conference on Systems, Man and Cybernetics, Montreal, QC, Canada, 7–10 October 2007; pp. 973–977.
46. He, Y. Mission-driven autonomous perception and fusion based on UAV swarm. Chin. J. Aeronaut. 2020, 33, 2831–2834. [CrossRef]
47. Hill, V.W.; Thomas, R.W.; Larson, J.D. Autonomous situation awareness for UAS swarms. arXiv 2021, arXiv:2104.08904.
48.
Frische, F.; Lüdtke, A. SA-Tracer: A tool for assessment of UAV swarm operator SA during mission execution. In Proceedings of
the 2013 IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support
(CogSIMA), San Diego, CA, USA, 25–28 February 2013; pp. 203–211.
49.
Geraldes, R.; Goncalves, A.; Lai, T.; Villerabel, M.; Deng, W.; Salta, A.; Nakayama, K.; Matsuo, Y.; Prendinger, H. UAV-based
situational awareness system using deep learning. IEEE Access 2019,7, 122583–122594. [CrossRef]
50.
Carrio, A.; Sampedro, C.; Rodriguez-Ramos, A.; Campoy, P. A review of deep learning methods and applications for unmanned
aerial vehicles. J. Sens. 2017, 1–13. [CrossRef]
51.
Igonin, D.; Kolganov, P.; Tiumentsev, Y. Providing situational awareness in the control of unmanned vehicles. Stud. Comput. Intell.
2021,925, 125–134.
52.
Zhang, J.; Jia, Y.; Zhu, D.; Hu, W.; Tang, Z. Study on the situational awareness system of mine fire rescue using Faster Ross
Girshick convolutional neural network. IEEE Intell. Syst. 2020,35, 54–61. [CrossRef]
53.
Peng, H.; Zhang, Y.; Yang, S.; Song, B. Battlefield image situational awareness application based on deep learning. IEEE Intell.
Syst. 2020,35, 36–43.
[CrossRef]
54. Almeida, R.B.; Junes, V.R.C.; da Silva Machado, R.; da Rosa, D.Y.L.; Donato, L.M.; Yamin, A.C.; Pernas, A.M. A distributed event-driven architectural model based on situational awareness applied on internet of things. Inf. Softw. Technol. 2019, 111, 144–158. [CrossRef]
55.
Schaefer, K.E.; Chen, J.Y.C.; Szalma, J.L.; Hancock, P.A. A meta-analysis of factors influencing the development of trust in
automation: Implications for understanding autonomy in future systems. Hum. Factors 2016,58, 377–400. [CrossRef]
56. Blom, H.A.P.; Sharpanskykh, A. Modelling situation awareness relations in a multiagent system. Appl. Intell. 2015,43, 412–423.
57.
Van der Heijden, F.; Duin, R.P.; de Ridder, D.; Tax, D.M.J. Classification, Parameter Estimation and State Estimation; John Wiley &
Sons: Hoboken, NJ, USA, 2004.
58. Hajiyev, C. State Estimation and Control for Low-Cost Unmanned Aerial Vehicles; Springer: Berlin/Heidelberg, Germany, 2010.
59.
Isermann, R.; Münchhoh, M. Identification of Dynamic Systems: An Introduction with Applications; Springer: Berlin/Heidelberg,
Germany, 2011.
60. Ljung, L. System Identification: Theory for the User, 2nd ed.; Prentice-Hall, Inc.: Upper Saddle River, NJ, USA, 1999.
61.
Wang, L.; Garnier, H. (Eds.) System Identification, Environment Modelling, and Control System Design; Springer: Berlin/Heidelberg,
Germany, 2012.
62.
Nelles, O. Nonlinear System Identification: From Classical Approaches to Neural Networks and Fuzzy Models, 2nd ed.; Springer:
Berlin/Heidelberg, Germany, 2020.
63.
Billings, S.A. Nonlinear System Identification: NARMAX Methods in the Time, Frequency, and Spatio-temporal Domains; John Wiley &
Sons: Chichester, UK, 2013.
64. Klein, V.; Morelli, E.A. Aircraft System Identification: Theory and Practice; AIAA, Inc.: Reston, VA, USA, 2006.
65. Jategaonkar, R.V. Flight Vehicle System Identification: A Time Domain Methodology; AIAA, Inc.: Reston, VA, USA, 2006.
66.
Tischler, M.B.; Remple, R.K. Aircraft and Rotorcraft System Identification: Engineering Methods with Flight-Test Examples; AIAA, Inc.:
Reston, VA, USA, 2006.
67.
Talebi, H.A.; Abdollahi, F.; Patel, R.V.; Khorasani, K. Neural Network-Based State Estimation of Nonlinear Systems: Application to Fault
Detection and Isolation; Springer: Berlin/Heidelberg, Germany, 2004.
68. Palade, V.; Bocaniala, C.D. Computational Intelligence in Fault Diagnosis; Springer: Berlin/Heidelberg, Germany, 2006.
69.
Blanke, M.; Kinnaert, M.; Lunze, J.; Staroswiecki, M. Diagnosis and Fault-Tolerant Control; Springer: Berlin/Heidelberg, Ger-
many, 2006.
70.
Patan, K. Artificial Neural Networks for the Modelling and Fault Diagnosis of Technical Processes; Springer: Berlin/Heidelberg,
Germany, 2008.
71.
Sobhani-Tehrani, E.; Khorasani, K. Fault Diagnosis of Nonlinear Systems Using a Hybrid Approach; Springer: Berlin/Heidelberg,
Germany, 2009.
72. Tiumentsev, Y.V.; Egorchev, M.V. Neural Network Modeling and Identification of Dynamical Systems; Elsevier: London, UK, 2019.
73.
Katok, A.; Hasselblatt, B. Introduction to the Modern Theory of Dynamical Systems; Cambridge University Press: Cambridge,
UK, 1995.
74. Ljung, L.; Glad, T. Modeling of Dynamic Systems; Prentice-Hall, Inc.: Englewood Cliffs, NJ, USA, 1994.
75. Khalil, H.K. Nonlinear Systems, 3rd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2002.
76.
Brusov, V.S.; Tiumentsev, Y.V. Neural Network Based Modeling of Aircraft Motion; The MAI Publishing House: Moscow, Russia, 2016.
(In Russian)
77. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; The MIT Press: Cambridge, MA, USA, 2017.
78. Sutton, R.S.; Barto, A.G. Reinforcement Learning: An Introduction, 2nd ed.; The MIT Press: Cambridge, MA, USA, 2018.
79.
Dong, H.; Ding, Z.; Zhang, S. (Eds.) Deep Reinforcement Learning: Fundamentals, Research and Applications; Springer: Singapore, 2020.
80.
Kamalapurkar, R.; Walters, P.; Rosenfeld, J.; Dixon W. Reinforcement Learning for Optimal Feedback Control: A Lyapunov-Based
Approach; Springer: Berlin/Heidelberg, Germany, 2018.
81.
Ahrens, C.D.; Henson, R. Essentials of Meteorology Today: An Invitation to the Atmosphere, 8th ed.; Cengage Learning: Boston, MA,
USA, 2018.
82.
Ahrens, C.D.; Henson, R. Meteorology Today: An Introduction to Weather, Climate, and the Environment, 12th ed.; Cengage Learning:
Boston, MA, USA, 2019.
83. Saha, K. The Earth’s Atmosphere: Its Physics and Dynamics; Springer: Berlin/Heidelberg, Germany, 2008.
84. Wells, N.C. The Atmosphere and Ocean: A Physical Introduction, 3rd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2012.
85. Njoku, E.G. (Ed.) Encyclopedia of Remote Sensing; Springer: Berlin/Heidelberg, Germany, 2014.
86.
Thenkabail, P.S. (Ed.) Remote Sensing Handbook. Vol. I: Remotely Sensed Data Characterization, Classification, and Accuracies; CRC
Press: London, UK, 2015.
87.
Thenkabail, P.S. (Ed.) Remote Sensing Handbook. Vol. II, Land Resources Monitoring, Modeling, and Mapping with Remote Sensing; CRC
Press: London, UK, 2016.
88.
Thenkabail, P.S. (Ed.) Remote Sensing Handbook. Vol. III, Remote Sensing of Water Resources, Disasters, and Urban Studies; CRC Press:
London, UK, 2016.
89.
Adams, J.B.; Gillespie, A.R. Remote Sensing of Landscapes with Spectral Images: A Physical Modeling Approach; Cambridge University
Press: Cambridge, UK, 2006.
90. Fraden, J. Handbook of Modern Sensors: Physics, Designs, and Applications, 5th ed.; Springer: New York, NY, USA, 2016.
91.
Krasilshchikov, M.N.; Sebryakov, G.G. (Eds.) Modern Information Technologies Applying to Maneuverable Unmanned Vehicles Guidance,
Navigation and Control Problems; Fizmatlit: Moscow, Russia, 2009. (In Russian)
92.
Liggins, M.E.; Hall, D.L.; Llinas, J. (Eds.) Handbook of Multisensor Data Fusion: Theory and Practice, 2nd ed.; CRC Press: London,
UK, 2009.
93. Ellis, G. Observers in Control Systems; Academic Press: Amsterdam, The Netherlands, 2002.
94. The FlightGear Manual, September 2021. Available online: http://flightgear.sourceforge.net/getstart-en/getstart-en.html (accessed on 1 December 2021).
95. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [CrossRef] [PubMed]
96.
Phillips, D.J.; Wheeler, T.A.; Kochenderfer, M.J. Generalizable intention prediction of human drivers at intersections. In
Proceedings of the IEEE Intelligent Vehicles Symposium (IV), Los Angeles, CA, USA, 11–14 June 2017; pp. 1665–1670. [CrossRef]
97.
Zyner, A.; Worrall, S.; Nebot, E. A recurrent neural network solution for predicting driver intention at unsignalized intersections.
IEEE Robot. Autom. Lett. 2018,3, 1759–1764. [CrossRef] [PubMed]
98. Gers, F.A.; Schmidhuber, J.; Cummins, F. Learning to forget: Continual prediction with LSTM. Neural Comput. 2000, 12, 2451–2471.
99.
Semantic Drone Dataset. Institute of Computer Graphics and Vision, Graz University of Technology (Austria). Available online:
http://dronedataset.icg.tugraz.at (accessed on 1 December 2021).
100.
Aerial Semantic Segmentation Drone Dataset. Available online: https://www.kaggle.com/nunenuh/semantic-drone (accessed
on 1 December 2021).
101.
Fast Image Augmentation Library. Available online: https://github.com/albumentations-team/albumentations (accessed on 1
December 2021).
102. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. arXiv 2015, arXiv:1505.04597v1.
103. Zhao, H.; Shi, J.; Qi, X.; Wang, X.; Jia, J. Pyramid scene parsing network. arXiv 2017, arXiv:1612.01105v2.
104. Chen, L.-C.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking atrous convolution for semantic image segmentation. arXiv 2017, arXiv:1706.05587v3.
105.
Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al.
ImageNet large scale visual recognition challenge. arXiv 2015, arXiv:1409.0575v3.
106.
Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.-C. MobileNetV2: Inverted residuals and linear bottlenecks. arXiv
2019, arXiv:1801.04381v4.
107. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. arXiv 2015, arXiv:1512.03385v1.
108. Loshchilov, I.; Hutter, Y. Decoupled weight decay regularization. arXiv 2019, arXiv:1711.05101v3.
109. Hinton, G.; Vinyals, O.; Dean, J. Distilling the knowledge in a neural network. arXiv 2015, arXiv:1503.02531v1.
110. Gou, J.; Yu, B.; Maybank, S.J.; Tao, D. Knowledge distillation: A survey. arXiv 2021, arXiv:2006.05525v7.
111. Alkhulaifi, A.; Alsahli, F.; Ahmad, I. Knowledge distillation in deep learning and its applications. PeerJ Comput. Sci. 2021, 7, 1–24.