Towards exploring adversarial learning for
anomaly detection in complex driving scenes
Nour Habib[0009-0007-6060-6177], Yunsu Cho[0009-0004-3052-040X], Abhishek Buragohain[0000-0002-9503-9498], and Andreas Rausch[0000-0002-6850-6409]
Institute for Software and Systems Engineering, Technische Universität Clausthal,
Arnold-Sommerfeld-Straße 1, Clausthal-Zellerfeld 38678, Germany
{nour.habib, yunsu.cho, abhishek.buragohain,
Abstract. Many Autonomous Systems (ASs), such as autonomous driving cars, perform various safety-critical functions. Many of these autonomous systems take advantage of Artificial Intelligence (AI) techniques to perceive their environment. However, these perception components cannot be formally verified, since the accuracy of such AI-based components depends heavily on the quality of the training data. Therefore, Machine Learning (ML)-based anomaly detection, a technique to identify data that does not belong to the training data, could be used as a safety-measuring indicator during the development and operational time of such AI-based components. Adversarial learning, a sub-field of machine learning, has proven its ability to detect anomalies in images and videos, with impressive results on simple data sets. Therefore, in this work, we investigate and provide insight into the performance of such techniques on a highly complex driving scenes dataset called Berkeley DeepDrive.
Keywords: Adversarial Learning, Artificial Intelligence, Anomaly Detection, Berkeley DeepDrive (BDD).
1 Introduction
Autonomous systems have achieved tremendous success in various domains, such
as autonomous cars, smart office systems, smart security systems, and surveil-
lance systems. With such advancements, nowadays, autonomous systems have
become very common in our daily life, where we use such systems regularly
even in various safety-critical application domains such as financial analysis.
All these current developments in various autonomous systems are due to in-
creased performance in the field of machine learning techniques. Many of these
autonomous systems have been developed as hybrid systems combining classically engineered subsystems with various Artificial Intelligence (AI) techniques. One such example is autonomous driving vehicles. In autonomous vehicles, the trajectory planning subsystem is usually designed in classic engineered ways, whereas the
arXiv:2307.05256v1 [cs.CV] 17 Jun 2023
perception part of such vehicles, which understands the surrounding environment, is based on AI techniques. Both parts are combined so that they can perform as a single system executing various safety-critical functions [22].
During the design and development phase of the perception subsystems in autonomous vehicles, perception engineers first label the training data. Then they use these labeled data and machine learning frameworks to train an interpreted function. To explain this concept of training an interpreted function, we can take the example of a perception task where the trained interpreted function classifies traffic sign images into the correct traffic sign class, as illustrated in Figure 1. In general, training data is a mapping of a finite set of input data to its corresponding output information. In this case, the input data are images of traffic signs, and the output information is the correct label, i.e., the traffic sign class. Once this machine-learned interpreted function is trained using this traffic sign training data, at test time it can map any image to one of the traffic sign classes specified in the training data's output information. However, if we consider such a machine-learned interpreted function to be part of a real autonomous vehicle's perception subsystem stack, one important question arises: to what extent is the output if_ml(x) of such an interpreted function if_ml reliable or safe enough for other subsystems in the AV to rely on? [15]
Fig. 1: Operational time check of dependability requirements.[14]
The process behind the development of classically engineered systems vastly differs from that of AI-based systems [16]. During the development of a classically engineered system, semi-formal requirements specifications are created first. These requirement specifications are later used during the testing and verification of the developed system. In the development of AI-based systems, however, such requirement specifications are never created; instead, they are replaced by a collection of training data. But such training data does not always contain all the required information and is sometimes incomplete [15]. In other cases, due to inaccurate labeling, some of the training data contains errors.
Manufacturers of AI-based systems, especially in a domain like ASs, need to meet very high standards to satisfy the regulations imposed on them. However, current engineering methods struggle to guarantee the dependability requirements from the perspectives of safety, security, and privacy in a cost-effective manner. Because of these limitations, development engineers struggle to completely guarantee all the requirements of such AI-based systems during the development phase. Aniculaesei et al. [4] introduced the concept of a dependability cage, which can be used to test a system's dependability requirements during both the development and operational phases.
In the concept of a dependability cage, as shown in Figure 2, the Quantitative Monitor plays an important role in validating the output of the machine-learned interpreted function. The idea of the Quantitative Monitor is that it checks whether the sensor data currently consumed by the system as input to the ML-interpreted function is semantically similar enough to the ground truth of the training data used during the development of the machine-learned interpreted function. In this way, the Quantitative Monitor tries to verify whether the output information of the machine-learned interpreted function is correct and safe. If the current input data to the interpreted function is not semantically similar enough to the training data, this is an indication that the output information of this function is not safe or reliable enough to be used for safety-critical tasks. Anomaly detection is a promising method to identify whether the input test data x is similar to the ground truth y of the training data, based on some semantic similarity metric (semantic_similar(x, y) >= threshold_semantic_similarity) (cf. Figure 1).
With the evolution of the field of Artificial Intelligence, many promising methods for anomaly detection, such as Generative Adversarial Networks, Autoencoders, and Variational Autoencoders, have been explored to provide a measurement of semantic similarity between the test data and the training data.
The fundamental principle of using an auto-encoder for anomaly detection was introduced by Japkowicz et al. [8]. The auto-encoder is first trained to minimize the distance error between its input image and its output, the reconstructed image. Once trained, at test time the auto-encoder takes a new image as input and tries to produce a reconstructed image close to the original input. If the distance error between the input and the reconstructed output image is higher than a certain threshold, the input image is considered an anomaly; otherwise, it is classified as not an anomaly. Amini et al. [2] demonstrated the same concept, but with training data containing daylight driving scenes and anomaly images consisting of nighttime driving scenes.
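The reconstruction-error criterion described above can be sketched in a few lines. This is an illustrative sketch, not the implementation from [8] or [2]: the toy "auto-encoder" (a clipping function standing in for a network trained on values in [0, 1]) and the threshold value are hypothetical.

```python
def reconstruction_error(x, x_hat):
    """Mean squared distance between an input vector and its reconstruction."""
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

def is_anomaly(x, autoencoder, threshold):
    """Flag x as an anomaly if its reconstruction error exceeds the threshold."""
    return reconstruction_error(x, autoencoder(x)) > threshold

# Toy stand-in for a trained auto-encoder: it reproduces data from the
# "training range" [0, 1] well and fails outside it (values get clipped).
toy_autoencoder = lambda x: [min(max(v, 0.0), 1.0) for v in x]

known = [0.2, 0.5, 0.8]     # inside the training range -> near-zero error
unknown = [3.0, -2.0, 5.0]  # far outside the range -> large error

print(is_anomaly(known, toy_autoencoder, threshold=0.01))    # False
print(is_anomaly(unknown, toy_autoencoder, threshold=0.01))  # True
```

The only design decision here is the threshold; in practice it would be calibrated on held-out normal data.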
But unlike autoencoder approaches, many Generative Adversarial Network (GAN)-based anomaly detection methods follow a slightly different underlying principle for discriminating between anomalous and non-anomalous data. Generative
Adversarial Networks (GANs) have some advantages over Autoencoders and
Variational Autoencoders. One of the main advantages of GANs for anomaly
detection is their ability to generate realistic data samples [19]. In a GAN, a
generator network learns to create new samples that are similar to the training
data, while a discriminator network learns to distinguish between real and fake
samples[19]. This means that a GAN can learn to generate highly realistic sam-
ples of normal data, which can be useful for detecting anomalous samples that
differ significantly from the learned distribution[19]. Another advantage of GANs
is that they can be trained to detect anomalies directly, without the need for a
separate anomaly detection algorithm. This is achieved by training the discrim-
inator network to not only distinguish between real and fake samples but also
between normal and anomalous samples. This approach is known as adversarial
anomaly detection and has been shown to be effective in detecting anomalies in
a variety of domains [19]. GAN-based anomaly detection techniques will be discussed further in Section 3 of this paper.
However, most of these papers have evaluated their techniques on simple data sets such as MNIST [10], CIFAR [9], and UCSD [11]. Images in these data sets have very simple features; for instance, in MNIST only one digit is present per image, and similarly in CIFAR only one object class is present per image. Another issue with these data sets is their low resolution. In the real world, driving scene images contain various classes of objects, captured under various lighting and weather conditions (rain, night, day, etc.). So the application of such anomaly detection techniques on complex driving scenes still needs to be evaluated.
To evaluate the performance of these GAN-based anomaly detection techniques on real-world driving scenes, we will first reproduce their work using their settings. Once we are able to reproduce their work, we will evaluate their technique on a complex real-world driving scenes dataset of our choice. For this evaluation, we have selected the BDD dataset [23] as our driving scenes dataset, because it has high variability in terms of object classes, number of objects, weather conditions, and environments.
The rest of the paper is organized as follows: In Section 2, we provide a brief introduction to the dependability cage monitoring architecture for autonomous systems; in Section 3, we provide an overview of GAN-based novelty detection works; in Section 4, the research questions for our work are introduced; in Section 5, we present the description, concept, and dataset for the selected GAN technique; in Section 6, we present the evaluation of the selected technique; in Section 7, we provide a short summary of the contribution of our work and future work in the direction of GAN-based techniques for anomaly detection.
2 Dependability Cage - A brief overview
To overcome the challenges posed by engineering dependable autonomous systems, the concept of a dependability cage has been proposed in [4][13][6][7][12]. Dependability cages are derived by engineers from existing development artifacts. The fundamental concept behind them is a continuous monitoring framework for the various subsystems of an autonomous system, as shown in Figure 2.
First, a high-level functional architecture of an Autonomous Vehicle (AV) has been established. It consists of three parts: 1. environment self-perception, 2. situation comprehension and action decision, and 3. trajectory planning and vehicle control [7][6][12]. The continuous monitoring framework addresses two issues: 1. It checks that the system shows the correct behavior based on the dependability requirements; the component handling this issue is called the qualitative monitor. 2. It makes sure the system operates in a situation or environment that has been considered and tested during the development phase of the autonomous system; the component handling this issue is called the quantitative monitor, as shown in Figure 2.
For the monitors to operate reliably, both require consistent and abstract data access from the system under consideration. This is handled by the input abstraction and output abstraction components. In Figure 2, these components are shown as the monitoring interface between the autonomous system and the two monitors. Both abstraction components convert the autonomous system's data into an abstract representation based on certain user-defined interfaces. The types and data values of this representation are decided based on the requirements specification and the dependability criteria derived by the engineers during the development phase of the autonomous system [15].
The quantitative monitor observes the abstract representation of the environment that it receives from the autonomous system's input and output abstraction components. For every situation, the monitor evaluates in real time whether the abstract situation is known, i.e., was tested during the development time of the autonomous system. The information about these tested situations is provided to the quantitative monitor by a knowledge base.
Since the study in this work focuses on quantitative monitoring, we discuss it in detail; for a better understanding of the qualitative monitor, refer to the work of Rausch et al. [15]. If one of the above-mentioned monitors detects any unsafe or incorrect behavior in the system, an appropriate safety decision has to be taken to uphold the dependability requirements. For this purpose, the continuous monitoring framework has a fail-operational reaction component, which receives the information from both monitors and must bring the corrupted system to a safe state. One such fail-operational reaction could be a graceful degradation of the autonomous system, as stated in [5]. As part of the safety decision, the system's data is recorded automatically. These recorded data can then be transferred back to system development, where they can be analyzed and used to resolve any unknown faulty behaviors of the system.
Fig. 2: Continuous Monitoring framework in the dependability cage. [15]
Finally, to realize such a Quantitative Monitor, an efficient solution to classify between known and unknown sensor data is required. As mentioned in the previous section, we will first re-evaluate some state-of-the-art generative adversarial network (GAN)-based anomaly detection techniques on the data sets published in their work. Once we successfully complete this first step, we will evaluate their performance on a driving scenes dataset quite similar to the scenes encountered by an autonomous vehicle in the real world. A study of these anomaly detection techniques and the selection of some of them for the evaluation are described in the following sections.
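As a rough illustration of the quantitative monitor's decision logic, the following sketch wraps a generic anomaly score and a fail-operational reaction. The function names, the toy score, and the threshold are our own assumptions and not part of the dependability cage implementation.

```python
# Hypothetical sketch of a quantitative monitor: it classifies incoming sensor
# data as "known" or "unknown" via an anomaly score and triggers a
# fail-operational reaction for unknown inputs.

def quantitative_monitor(abstract_input, anomaly_score, threshold):
    """Return 'known' if the input resembles the training distribution."""
    return "known" if anomaly_score(abstract_input) <= threshold else "unknown"

def fail_operational_reaction(verdict):
    """E.g., graceful degradation when the situation is unknown [5]."""
    if verdict == "known":
        return "continue nominal operation"
    return "degrade gracefully and record data"

# Toy anomaly score: distance of the input from a 'training' reference value.
score = lambda x: abs(x - 0.5)

v1 = quantitative_monitor(0.55, score, threshold=0.2)
v2 = quantitative_monitor(9.0, score, threshold=0.2)
print(v1, "->", fail_operational_reaction(v1))  # known -> continue nominal operation
print(v2, "->", fail_operational_reaction(v2))  # unknown -> degrade gracefully and record data
```

In the actual framework, the anomaly score would come from a learned model such as the GAN-based detectors discussed next.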
3 Previous works on GAN-based anomaly detection
Anomaly detection has become a genuine concern in various fields, such as security, communication, and data analysis, which makes it of significant interest to researchers. In this paper, we want to explore its ability to distinguish unknown from known driving scenes from the perspective of an autonomous vehicle. Unknown driving scenes can be considered anomalies that were not considered during the development time of the autonomous vehicle, whereas known driving scenes, which were considered during development, are detected as not anomalous. Various theoretical and applied research has been published in the scope of detecting anomalies in data. In the following subsections, we review the research papers from which we selected the approaches; these selected approaches will be our reference for further evaluation later in this paper.
3.1 Generative adversarial network-based novelty detection using
minimized reconstruction error
This paper investigates the traditional semi-supervised approach of deep convolutional generative adversarial networks (DC-GAN) for detecting novelty in both the MNIST digit database and the Tennessee Eastman (TE) dataset [21]. Figure 3 presents the structure of the DC-GAN network used in the paper. The generator uses a random vector Z (latent space) as input for generating samples G(Z), and the discriminator determines whether those samples belong to the dataset distribution, i.e., indicating them as not-novel images, or do not belong to it, in which case they are indicated as novel images [21]. Only data with normal classes were used during training, so the discriminator learns the normal features of the dataset to discriminate outliers in the data during the evaluation. The GAN loss function used in the paper is the minimax loss, as presented in equation 1 [21].
min_G max_D V(D, G) = E_{x~p_data(x)}[log D(x)] + E_{z~p_z(z)}[log(1 - D(G(z)))]   (1)
The generator tries to minimize the error of this loss function and reconstruct better images G(z), while the discriminator D(x) tries to maximize the error of this loss function and indicate the generated samples as fake images [21].
Fig. 3: The structure of DC-GAN [21]
The evaluation metrics used for the MNIST dataset are the novelty score f_g(x), called the G-score, and the D-score f_d(x) [21]. The G-score is calculated as presented in equation 2 by minimizing the reconstruction error between the generated sample G(z) and the reference actual image x, while the D-score is the discriminator output (decision), which varies between 0 (fake, novel) and 1 (real, not novel) [21], as presented in equations 2 and 3 [21].

f_g(x) = min_z ||x - G(z)||   (2)

f_d(x) = D(x)   (3)
For benchmarking, the paper applied another approach using Principal Component Analysis (PCA)-based novelty detection methods on the data, using Hotelling's T² and squared prediction error (SPE) statistics for comparison [21]. The approaches applied in this paper were able to detect the anomalies successfully and with high accuracy.
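To make the two scores concrete, the following toy sketch approximates the G-score of equation 2 by a grid search over a one-dimensional latent space. The toy generator, the discriminator stub, and the search grid are our own illustrative assumptions, not the networks from [21].

```python
# Toy sketch of the G-score (eq. 2) and D-score (eq. 3).

def g_score(x, generator, z_grid):
    """f_g(x) = min_z ||x - G(z)||, approximated by a grid search over z."""
    return min(abs(x - generator(z)) for z in z_grid)

def d_score(x, discriminator):
    """f_d(x) = D(x), the discriminator's real/fake decision in [0, 1]."""
    return discriminator(x)

toy_generator = lambda z: 2.0 * z          # can only produce values in [0, 2]
toy_discriminator = lambda x: 1.0 if 0.0 <= x <= 2.0 else 0.0

z_grid = [i / 100.0 for i in range(101)]   # z in [0, 1]

print(g_score(1.3, toy_generator, z_grid))  # ~0: x is reachable by G -> not novel
print(g_score(5.0, toy_generator, z_grid))  # 3.0: far from G's range -> novel
```

In the paper, the minimization over z is of course performed in a high-dimensional latent space rather than by a grid search.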
3.2 Unsupervised and Semi-supervised Novelty Detection using
Variational Autoencoders in Opportunistic Science Missions
In the context of ensuring the safety of robots sent on scientific exploratory missions to other planets, i.e., ensuring that the robots reach their goals and carry out the desired investigation operations without performing any unplanned actions or deviating from the desired goals, this paper provides an unsupervised and semi-supervised approach to anomaly detection [20]. The approach is based on the Variational Autoencoder (VAE) model and focuses on detecting anomalies in the camera data using data from previous scientific missions, besides providing a study of VAE-based loss functions for generating the best reconstruction errors for detecting anomalous features [20]. The structure of the novelty detection approach used in the paper is presented in Figure 4, which shows the sample generator of both the semi-supervised and the unsupervised model for calculating the anomaly score.
Fig. 4: The structure of GAN for novelty detection approach. [20]
The paper used a variety of different losses to obtain the best performance and was able to provide comparable, state-of-the-art results using the novelty score, which is the output of the applied neural networks.
3.3 Adversarially Learned One-Class Classifier for Novelty Detection
This paper provides a one-class classification model for separating outlier samples in the data from inlier ones. The approach is based on an end-to-end generative adversarial network trained in a semi-supervised manner with slight modifications to the generator [17]. The approach encodes typical normal images to their latent space before decoding and reconstructing them again. The reconstructed images are sent as input to the discriminator network, which learns the normal features of those images [17]. Based on that, anomaly scores are assigned to the corresponding images to help detect whether they are novel or normal, as presented in Figure 5.
Fig. 5: The Generative adversarial Network Structure. [17]
3.4 GANomaly: Semi-Supervised Anomaly Detection via
Adversarial Training
This paper follows up on several other approaches [3] to investigate how inverse mapping from the reconstructed image to the latent space can be more efficient and objective for anomaly detection than the reconstruction error between the original and reconstructed images [1]. The generative characteristics of the variational autoencoder make it possible to analyze the data to determine the anomaly's cause; this approach takes the distribution of variables into account [1]. [19] hypothesizes that the latent space of generative adversarial networks represents the true distribution of the data [1] and proposes remapping to the GAN's latent space. The approach of [24] provided statistically and computationally ideal results by simultaneously mapping from image space to latent space [1]. Building on these three approaches, this paper proposes a generic anomaly detection architecture comprising an adversarial training framework. The approach used normal images (simple in terms of size and the number of classes they contain) for training (the MNIST and CIFAR datasets), providing excellent results.
For our further evaluation and investigations in this work, we will only consider GANomaly from subsection 3.4. We selected GANomaly since it showed very promising results in terms of accuracy on the data sets used in its paper. Another interesting factor is that it uses the distance error between the latent spaces of the original image and its corresponding reconstructed output as the factor for finding anomalous images.
4 Research questions in this work
An anomaly in the data is considered a risk that leads to unexpected behavior in the system. It may lead to incorrect results; in some cases, it can be catastrophic and threatening. An anomaly is a deviation from the dataset distribution: an unexpected item, event, or technical glitch in the dataset that deviates from normal behavior or the standard pattern of the data. This deviation may lead to erratic behavior of the system and abnormal procedures outside the scope that the system was trained to perform. Therefore, the discovery of anomalies has become of interest to many researchers due to its significant role in solving real-world problems, such as detecting abnormal patterns in the data and making it possible to take prior measures to prevent wrong actions based on the camera input. Our work is inspired by the work of Rausch et al. [15], whose research was motivated by detecting anomalies in driving scenes for the safety of autonomous driving systems. Their research was successfully validated on the image dataset MNIST. Their approach was able to reconstruct the input images using a fully-connected autoencoder network; the autoencoder network learned the features of not-novel images, which were later used as a factor to discriminate novel from not-novel images.
In this work, we re-evaluate the GANomaly approach, as mentioned at the end of the previous section, for the task of anomaly detection. The approach was originally applied to the simple datasets MNIST and CIFAR, where it was able to reconstruct the input images correctly and learn the features for discriminating anomalies from non-anomalies. As part of our work, GANomaly will be applied to a more complex real-world driving scene dataset, Berkeley DeepDrive [23]. The complexity of the images in this dataset, such as the image dimensions, the RGB color channels, and the number of classes in each image, poses a challenge for the approach. We have formulated the contribution of our work in the following research questions (RQ).
RQ.1: Can we reproduce the work of GANomaly on one of their used datasets?
RQ.2: Does such a GAN approach have the ability to reconstruct high-dimensional RGB driving scene images?
RQ.3: Can such a GAN approach be applied to a highly complex driving scenes data set for the task of anomaly detection?
5 Evaluation process for this work
In this section, we explain the GANomaly architecture, the data sets, the training process, and the evaluation metrics considered as part of the evaluation process.
5.1 GANomaly architecture
GANomaly is an unsupervised approach derived from GAN that was developed for anomaly detection purposes. Its structure is a follow-up to the approach implemented in [1]. The approach consists of three networks; the overall structure of GANomaly is illustrated in Figure 6. The generator network G, which consists of the encoder model G_E and the decoder model G_D, is responsible for learning the normal data distribution, which is free of any outlier classes, and for generating realistic samples. The encoder network E maps the reconstructed image X̂ to the latent space Ẑ and finds the feature representation of the image. The discriminator network D classifies the image as either real or fake.
Fig. 6: The structure of GANomaly [1]
Generator G is an encoder-decoder model. The encoder is responsible for compressing the input sample and reducing its dimensions to the vector Z (latent space), which represents the most important features of the input sample. The decoder, on the other hand, decompresses the latent space and reconstructs the input sample as realistically as possible.
The training flow of the Generator is as follows: the Generator Encoder G_E reads the input data X and maps it to its latent space Z, the bottleneck of the autoencoder, using three sequential groups of layers (convolutional layer, batch normalization, and finally a LeakyReLU layer), downscaling the data to the smallest dimensions that should contain the best representation of the data, i.e., its most important features. The Generator Decoder G_D decodes the latent space Z and reconstructs the image again as X̂. G_D follows the architecture of the DCGAN generator, using three groups of layers (deconvolutional layer, batch normalization, and finally a ReLU layer), followed by a final Tanh layer so that the values are normalized between [-1, 1]; it upscales the vector Z and reconstructs the input image as X̂. The Generator thus reconstructs the image X̂ based on the latent space Z = G_E(X).
The Encoder E acts exactly like the Generator Encoder G_E; however, E downscales the reconstructed image X̂ to its latent space Ẑ with a different parametrization than G_E. E learns to compress X̂ to the smallest dimension, which should hold the best representation of X̂, but with its own parametrization. The dimension of Ẑ = E(X̂) is exactly the same as the dimension of Z = G_E(X). The Encoder is essential for the testing stage, as it is part of calculating the anomaly score of the images.
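The role of the two encoders in anomaly scoring can be sketched as follows. All the "networks" here are toy one-dimensional functions of our own invention (GANomaly's actual networks are convolutional), so this only illustrates the scoring logic, not the published model.

```python
# Hypothetical sketch of GANomaly's anomaly score: the distance between the
# latent code Z = G_E(X) of the input and the latent code Z_hat = E(X_hat) of
# its reconstruction.

def anomaly_score(x, g_encoder, g_decoder, encoder):
    z = g_encoder(x)                 # Z = G_E(X)
    x_hat = g_decoder(z)             # X_hat = G_D(Z)
    z_hat = encoder(x_hat)           # Z_hat = E(X_hat)
    return abs(z - z_hat)            # ||Z - Z_hat||

# Toy networks "trained" on normal data in [0, 1]: the decoder clips its
# output to that range, so abnormal inputs cannot be reconstructed.
g_encoder = lambda x: 0.5 * x
g_decoder = lambda z: min(max(2.0 * z, 0.0), 1.0)
encoder = lambda x: 0.5 * x

print(anomaly_score(0.8, g_encoder, g_decoder, encoder))  # 0.0 -> normal
print(anomaly_score(4.0, g_encoder, g_decoder, encoder))  # 1.5 -> anomalous
```

The key property the sketch preserves is that the score stays near zero exactly when the decoder can faithfully reconstruct the input.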
The Discriminator D follows the architecture of the DCGAN discriminator and is responsible for classifying images as fake or normal, using five groups of layers (convolutional layer, batch normalization, and finally a LeakyReLU layer), followed at the end by a sigmoid layer so that the results are normalized between [0, 1]. However, in the GANomaly approach, anomaly detection does not rely on the Discriminator's classification results. The main use of the Discriminator in this approach is feature matching, using the values in the last hidden layer before the sigmoid layer. It reduces the distance between the extracted features of the input image X and the reconstructed image X̂ and feeds the results back to the Generator to improve its reconstruction performance. The Discriminator is trained using a binary cross-entropy loss with target class 1 for input images X and target class 0 for reconstructed images X̂, as in the following equation 4 [1].
L_D = -(1/m) Σ_{i=1}^{m} [log(D(x_i)) + log(1 - D(G(x_i)))]   (4)
The GANomaly approach hypothesizes that after compressing an abnormal image to its latent space Z = G_E(X), the latent space would be free of any anomalous features. That is because G_E is only trained to compress normal images, which contain normal classes, during the training stage. As a result, the Generator Decoder G_D would not be able to reconstruct the anomalous classes again, because the learned parametrization is not suitable for reconstructing them. Correspondingly, the reconstructed images X̂ would be free of any anomalies. The Encoder then compresses the reconstructed image X̂, which is hypothesized to be free of anomalous classes, to its latent space Ẑ, which is supposed to be free of anomalous features as well. As a result, the difference between Z and Ẑ would increase, indicating the detection of an anomaly in the image. This hypothesis was validated using three different types of loss functions; each of them optimizes a different sub-network, and, as a final step, their combined result is passed to the Generator for updating its weights.
Adversarial Loss: This loss follows the feature-matching approach proposed
by Salimans et al. [18], which helps to reduce the instability of GAN training.
It uses the feature values of an intermediate layer of the Discriminator to
reduce the distance between the feature representations of the input image X,
which follows the data distribution p_X, and the reconstructed image X̂. This
loss is also meant to fool the Discriminator into judging the reconstructed
image as real. Let f denote the output of the intermediate layer of the
Discriminator. The adversarial loss L_adv is calculated as in equation 5 [1].
$L_{adv} = \mathbb{E}_{X \sim p_X}\,\lVert f(X) - f(G(X)) \rVert_2$ (5)
Contextual Loss: This loss is meant to improve the quality of the reconstructed
image by penalizing the Generator with the distance between the
input image X and the reconstructed image X̂, as in equation 6 [1]:
$L_{con} = \mathbb{E}_{X \sim p_X}\,\lVert X - G(X) \rVert_1$ (6)
Encoder Loss: This loss reduces the distance between the latent space Z,
mapped from the original image X by the Generator Encoder as Z = G_E(X),
and the latent space Ẑ, mapped from the reconstructed image X̂ by the
Encoder as Ẑ = E(X̂), as in equation 7 [1]:
$L_{enc} = \mathbb{E}_{X \sim p_X}\,\lVert G_E(X) - E(G(X)) \rVert_2$ (7)
The Generator learns to reconstruct images that are free of anomaly classes
because both the Generator Encoder and the Encoder are trained to compress
only the normal features of the images, and they would fail to compress
abnormal features. As a result, the distance between the features of an
abnormal image and its reconstruction would increase, indicating the anomaly
in the image. The total loss on which the Generator depends for updating its
weights is calculated as in equation 8 [1].
$L_G = w_{adv} L_{adv} + w_{con} L_{con} + w_{enc} L_{enc}$ (8)
w_adv, w_con, and w_enc are the weighting parameters of the overall Generator
loss, which updates the Generator. The initial weights used in this approach
are w_adv = 1, w_con = 20, and w_enc = 1.
The model is optimized using the Adam optimizer with a learning rate of
0.0002 and a momentum of 0.5.
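A minimal sketch of how the three losses combine into the Generator objective of equation 8 with the stated weights; the lists passed in (`f_x`/`f_xhat` for Discriminator features, `z`/`z_hat` for latent codes) are hypothetical stand-ins for the network outputs, not the paper's actual tensors:

```python
def l1(a, b):
    """Mean absolute difference between two equal-length vectors."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def l2(a, b):
    """Mean squared difference between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def generator_loss(x, x_hat, f_x, f_xhat, z, z_hat,
                   w_adv=1.0, w_con=20.0, w_enc=1.0):
    """Weighted sum of eq. (8) with the paper's initial weights."""
    loss_adv = l2(f_x, f_xhat)   # eq. (5): feature matching
    loss_con = l1(x, x_hat)      # eq. (6): reconstruction quality
    loss_enc = l2(z, z_hat)      # eq. (7): latent-space distance
    return w_adv * loss_adv + w_con * loss_con + w_enc * loss_enc
```

With w_con = 20, reconstruction quality dominates the objective, which is consistent with the high-fidelity reconstructions reported later in section 6.2.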
5.2 Datasets
Berkeley DeepDrive (BDD) dataset includes high-resolution images (1280 x
720 px) of real-life driving scenes. These images were taken in varying locations
(city streets, highways, gas stations, tunnels, residential areas, parking places,
and villages), at three different times of the day (daytime, night, dawn), and in
six different weather conditions (rainy, foggy, cloudy, snowy, clear, and overcast) [23].
The dataset includes two packages. The first package contains 100k images,
comprising several sequences of driving scenes as well as videos of those
tours. The second package contains 10k images; it is not a subset of the
100k images, but there are many overlaps between the two packages [23].
BDD's usage covers many topics, including lane detection, road object
detection, semantic segmentation, panoptic segmentation, and tracking. In this
work, the 10k package is used; this package has two components, called Things
and Stuff. The Things component includes countable objects such as people,
flowers, birds, and animals. The Stuff component includes repeating patterns
such as roads, sky, buildings, and grass. The 10k package is mainly labeled
with 19 different object classes (road, sidewalk, building, wall, fence, pole,
traffic light, traffic sign, vegetation, terrain, sky, person, rider, car,
truck, bus, train, motorcycle, bicycle). Figure 7 illustrates some samples of
the BDD 10k dataset.
Fig. 7: Samples of BDD dataset [23]
Four of the 19 labels were considered novel objects for our novelty detection
purpose, so the dataset is separated into two parts. The first part, the Novel
dataset, contains images that have one of the following objects listed in their
labels: rider, train, motorcycle, and bicycle. The second part, the Normal
dataset, contains images that do not have any of these four novel labels.
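The normal/novel split described above can be sketched as follows; the `(image_id, labels)` pair format is a hypothetical in-memory view of the BDD annotations, not the dataset's actual file format:

```python
# The four labels treated as novel in this work
NOVEL_LABELS = {"rider", "train", "motorcycle", "bicycle"}

def split_dataset(samples):
    """Split (image_id, labels) pairs into Normal and Novel parts.
    labels is the set of object classes annotated in the image."""
    normal, novel = [], []
    for image_id, labels in samples:
        if labels & NOVEL_LABELS:   # any novel class present
            novel.append(image_id)
        else:
            normal.append(image_id)
    return normal, novel
```

An image lands in the Novel part as soon as a single novel label appears, which matches the label-list criterion above.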
6 Evaluation
In this section, we evaluate the performance of GANomaly based on the research
questions formulated in section 4 and the evaluation process from the
previous section.
6.1 Evaluation for RQ.1:
We replicated the results illustrated in the GANomaly reference paper
[1] to ensure that our architecture and training method are capable of detecting
anomalies on the same dataset as the reference paper. The GANomaly setup
was trained on the MNIST dataset with several anomaly parameterizations: each
time, one digit was considered novel (abnormal) while the other digits were
normal. To provide comparable results, a quantitative evaluation was applied
by calculating the area under the curve (AUC) for the result of each trained
network in detecting the abnormal digit. In the GANomaly approach applied in
this paper, two types of anomaly scores indicating the anomalies in the
reconstructed images were calculated. The original method, presented
in equation 9, uses the Generator Encoder to map the input image X to
the latent space Z and the Encoder to map the reconstructed image X̂ to
the latent space Ẑ, and calculates the difference. The blue line in figure 8
indicates the original method. The second method, presented in equation 10,
uses the Generator Encoder to map both the original image X and the
reconstructed image X̂ to their latent spaces Z and Ẑ, respectively. The red
line in figure 8 indicates the second method. Both methods are explained in
detail in section 6.3.
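The AUC used for this quantitative comparison can be computed without plotting an ROC curve, via the equivalent Mann-Whitney U statistic. A small pure-Python sketch (not the evaluation code used in the paper):

```python
def auc(scores, labels):
    """AUC as the probability that a randomly chosen abnormal sample
    (label 1) receives a higher anomaly score than a randomly chosen
    normal one (label 0); ties count half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 1.0 means every abnormal sample outscores every normal one; 0.5 is chance level, which is the benchmark against which the per-digit results in figure 8 should be read.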
Fig. 8: Left: reproduced results of GANomaly; right: original results in the
reference paper of GANomaly [1].
As shown in figure 8, we were able to approximate the results obtained
in the reference paper, with slight differences due to modifications of
parameters and hyper-parameters made to obtain the best reconstruction quality
on the complex Berkeley DeepDrive dataset, to which the approach was applied.
Figure 8 shows that the model achieved an excellent anomaly detection result,
indicated by the high AUC (area under the curve) values for the digits 0, 1, 2,
and 8. The results were better than the reference paper for digits 1 and 8,
slightly lower for digits 3, 4, 5, 6, and 7, and equal for some digits such as
9. So in reference to RQ.1, we can conclude that we were able to successfully
reproduce the paper's results.
6.2 Evaluation for RQ.2:
During training, only normal images were used, 6239 images in total. For
evaluation, the test sub-dataset was used, which includes 902 normal images,
free from outlier classes, and 957 abnormal images containing outlier classes.
The same training method mentioned in the reference paper [1] was followed and
replicated using the MNIST dataset. With some modifications to the architecture
of GANomaly, the architecture was able to reconstruct the images in high
resolution with minimal reconstruction error. Figure 9 illustrates the
performance of the GANomaly setup in reconstructing the images. As illustrated
in figure 9, the top sample contains a motorcycle (an abnormal class) in the
bottom corner of the image. The Generator of GANomaly was still able to
reconstruct the abnormal classes as efficiently as the normal classes; the
region of the abnormal class was reconstructed properly without any distortion.
Fig. 9: GANomaly performance in reconstructing the images (top: with anomaly class).
The unsatisfactory results in detecting anomalies in the Berkeley DeepDrive
dataset can be attributed to the high complexity of the dataset and the
unsuitability of the selected abnormal objects.
Analyzing the Berkeley DeepDrive images reveals several challenges and
drawbacks. Some images are labeled with classes defined as abnormal in our
approach, which indicates that abnormal classes exist in those images;
however, the abnormal classes are not visible or recognizable, as in figure 10a,
or not fully or clearly visible, as in figure 10b. In addition, some classes
defined as abnormal have high feature similarity with classes defined as
normal, as in figure 10c.
The GANomaly approach did not succeed in detecting the abnormal classes
in the reconstructed images. The Generator, despite being trained only on
normal classes, was still able to reconstruct the abnormal classes during
testing. This is due to the feature similarity between the normal and abnormal
classes mentioned previously.
(a) The image has a "Train" label but the train is not visible.
(b) Abnormal classes are barely visible or small and not clear.
(c) The train has high similarity in terms of features with a building block.
Fig. 10: Challenges and drawbacks with BDD
So in reference to RQ.2, we could conclude that the GANomaly technique
could successfully reconstruct driving scenes of the Berkeley DeepDrive dataset.
6.3 Evaluation for RQ.3:
GANomaly is one of the more recently developed GAN-based approaches; it aims to
detect anomalies in the dataset rather than learning to generate samples that
belong to the original data distribution. Moreover, GANomaly consists of
multiple models, and it does not depend on the Discriminator to discriminate
the samples and classify them into novel and normal ones.
As mentioned in the GANomaly reference paper [1], the hypothesis is that the
Generator should not be able to reconstruct the outliers in abnormal images.
As a result, the Generator Encoder G_E maps the input image X to its latent
space Z without the abnormal features. On the other hand, the Encoder should
be able to extract the full features of the image (normal and abnormal) and
map it to its latent space Ẑ. It is therefore expected that the difference
between Z and Ẑ increases with the number of outliers in the image. To
evaluate this approach, the encoder loss L_enc is applied to the test data
D_test by assigning an anomaly score A(x) (also denoted s_x) to each sample x,
yielding a set of anomaly scores as illustrated in equation 9 [1].
$S = \{\, s_i : A(x_i),\ x_i \in D_{test} \,\}$, where the anomaly score function is [1]:

$A(x_i) = \lVert G_E(x_i) - E(\hat{x}_i) \rVert_1$ (9)
A second approach to calculating the anomaly score was also applied. During
testing, the Generator was able to successfully reconstruct the input images
with their outliers, but with slight distortion. So the Generator Encoder was
applied to both the input images and the reconstructed samples, mapping them
to their latent spaces Z and Ẑ; as a result, we expected the abnormal features
to be more extractable from the reconstructed images. The anomaly score
function is then transformed to the form illustrated in equation 10 [1].
$A(x_i) = \lVert G_E(x_i) - G_E(\hat{x}_i) \rVert_1$ (10)
Finally, the anomaly scores of both approaches were scaled to [0, 1] as in equation 11 [1]:

$\hat{s}_i = \dfrac{s_i - \min(S)}{\max(S) - \min(S)}$ (11)
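The min-max scaling of equation 11 amounts to a few lines of code; a small sketch, assuming the anomaly scores are collected in a plain list:

```python
def scale_scores(scores):
    """Eq. (11): min-max scale a list of anomaly scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]
```

Note that the result depends on the extreme scores in the batch, which is why the choice of scaling range matters so much in section 6.4.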
The anomaly score was calculated for each image and then scaled to [0, 1]
depending on the maximum and minimum anomaly scores over all images. The
threshold was selected by calculating the anomaly scores of the training
images and of 98 abnormal images, a subset of the abnormal test images; the
threshold that best separates the normal and abnormal images was selected.
The evaluation was applied using different thresholds in the range [0.4, 0.6],
and a threshold of 0.5 gave the best separation. Figure 11a illustrates the
scatter diagram of the first approach after separation with threshold 0.5,
and figure 11b illustrates the scatter diagram of the second approach after
separation with threshold 0.5.
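The threshold search described above can be sketched as a simple sweep; since the paper does not state its separation criterion, accuracy on the labeled subset is assumed here:

```python
def best_threshold(scores, labels, lo=0.4, hi=0.6, steps=21):
    """Sweep candidate thresholds over [lo, hi] and keep the one that
    separates normal (0) from abnormal (1) samples best, measured by
    accuracy on the scaled anomaly scores."""
    best_t, best_acc = lo, -1.0
    for k in range(steps):
        t = lo + k * (hi - lo) / (steps - 1)
        preds = [1 if s >= t else 0 for s in scores]
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc
```

On well-separated scores the sweep lands on any threshold inside the gap between the two score clusters; the 0.5 reported above sits in the middle of the searched range.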
(a) Scatter diagram of GANomaly scores depending on the Generator Encoder and the Encoder.
(b) Scatter diagram of GANomaly scores depending on the Encoder of the Generator only.
Fig. 11: Comparing results between the two approaches.
A confusion matrix for both GANomaly score approaches was computed to
provide results comparable with the GANomaly reference paper, and the derived
metrics were calculated. Table 1a presents the confusion matrix and derived
metrics using the Generator Encoder for the input image and the Encoder for
the reconstructed image (hereafter called GANomaly score 1), and table 1b
presents the confusion matrix and derived metrics using the Generator Encoder
for both the input image and the reconstructed image (hereafter called
GANomaly score 2). So in reference to RQ.3, we can conclude that the GANomaly
technique, which was successful in detecting anomalies on the MNIST dataset,
was not successful when applied to anomaly detection on the driving scenes
dataset considered in our work.
Table 1: Quantitative results of the two approaches (epoch 190).

(a) A(X) = ‖G_E(X) − E(G(X))‖

Predicted \ True    Normal   Novel
Normal                 886     954
Novel                   16       3

f1-score: 0.64, ACC: 0.47, P: 0.48, Sn: 0.98

(b) A(X) = ‖G_E(X) − G_E(G(X))‖

Predicted \ True    Normal   Novel
Normal                 460     448
Novel                  442     509

f1-score: 0.50, ACC: 0.52, P: 0.50, Sn: 0.50
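The derived metrics can be recomputed from the confusion-matrix counts. In this sketch, 'normal' is taken as the positive class, since that choice reproduces the precision and sensitivity reported for Table 1a:

```python
def derived_metrics(tp, fp, fn, tn):
    """Precision P, sensitivity Sn, accuracy ACC and f1-score from
    confusion-matrix counts, with 'normal' as the positive class."""
    p = tp / (tp + fp)                    # precision
    sn = tp / (tp + fn)                   # sensitivity (recall)
    acc = (tp + tn) / (tp + fp + fn + tn) # accuracy
    f1 = 2 * p * sn / (p + sn)            # harmonic mean of P and Sn
    return p, sn, acc, f1

# Table 1a counts: 886 true normals predicted normal, 954 novels
# predicted normal, 16 normals predicted novel, 3 novels predicted novel
p, sn, acc, f1 = derived_metrics(tp=886, fp=954, fn=16, tn=3)
```

Evaluating these counts gives P ≈ 0.48, Sn ≈ 0.98, ACC ≈ 0.48 and f1 ≈ 0.65, approximately matching the values reported in Table 1a.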
6.4 Evaluation supporting RQ.3
Another scaling approach was applied to the anomaly scores of the samples.
This approach manipulates the threshold that separates the novel from the
normal images, as well as the scaling range [max, min] used to scale the
anomaly scores to [0, 1], and it uses both the training and test datasets
to determine the scaling ranges. The anomaly scores of all normal samples
were scaled using the maximum and minimum anomaly scores among the normal
samples, as in equation 12 [1]. The maximum and minimum anomaly scores of the
abnormal samples were calculated as well, and the anomaly scores of all
abnormal samples were scaled using those values, as in equation 13 [1].
The scaling equation for the anomaly scores of normal images is:

$\hat{s}_i^{\,normal} = \dfrac{s_i - \min(S_{normal})}{\max(S_{normal}) - \min(S_{normal})}$ (12)

The scaling equation for the anomaly scores of abnormal images is:

$\hat{s}_i^{\,abnormal} = \dfrac{s_i - \min(S_{abnormal})}{\max(S_{abnormal}) - \min(S_{abnormal})}$ (13)
The threshold was tested in the range [0.4, 0.55], and the threshold giving
the best separation of the novel and normal samples was selected.
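The per-class scaling of equations 12 and 13 can be sketched as follows; note that it needs the true labels to pick the scaling range, i.e., prior knowledge of which images are abnormal:

```python
def scale_per_class(scores, is_abnormal):
    """Eqs. (12)/(13): scale the anomaly scores of the normal and the
    abnormal samples separately, each with its own min/max range."""
    normal = [s for s, a in zip(scores, is_abnormal) if not a]
    abnormal = [s for s, a in zip(scores, is_abnormal) if a]
    ranges = {
        False: (min(normal), max(normal)),
        True: (min(abnormal), max(abnormal)),
    }
    return [(s - ranges[a][0]) / (ranges[a][1] - ranges[a][0])
            for s, a in zip(scores, is_abnormal)]
```

Because each class is normalized against its own extremes, the two score populations are artificially pulled apart, which explains the very clean separation reported below.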
This approach provided excellent accuracy in classifying the normal images
from the novel images. Table 2 presents the confusion matrix and the accuracy
of the approach using the threshold 0.47, which gives the best separation of
the normal and abnormal images. Figure 12 illustrates how the anomaly scores
are scattered.
As stated before, the fundamental idea behind anomaly detection is to detect
unknown occurrences of input data not used during training of the ML method
(in our case GANomaly), without any prior knowledge about these unknown data.
In the case of GANomaly, the good separation was only achieved because the
unknown set of images was used to determine the scaling ranges and the
threshold, i.e., with prior knowledge of the unknown data. Without this prior
knowledge, it classified most of the known images as not anomalous, but also
most of the unknown images as not anomalous. So, based on the fundamental
idea of anomaly detection, GANomaly was not able to classify the unknown
driving scenes as anomalies.
Table 2: Results using different scaling ranges for the anomaly scores
(threshold = 0.47, ACC = 96.3%).

Predicted \ True    Normal   Novel
Normal                 878      44
Novel                   24     913
Fig. 12: Scatter diagram for the scaled results using different scaling ranges.
Blue points are declared normal, while pink points are declared fake (novel) images.
7 Summary and Outlook
The GANomaly approach considered in this work originally demonstrated its
anomaly detection performance on the image datasets MNIST and CIFAR. These
datasets contain simple images in terms of size, quality, and number of object
classes per image compared to Berkeley DeepDrive: the method was applied to
RGB 32x32 px images of CIFAR and to the gray-scale 28x28 px images of MNIST,
and both datasets contain only one class per image. The evaluation of GANomaly
is based on its efficiency in detecting anomalies on driving scene images from
the Berkeley DeepDrive dataset, measured using the confusion matrix. We were
able to follow the architecture of GANomaly [1] and the training method in the
reference paper [1], and we successfully reproduced the work of GANomaly on
the MNIST dataset with our modified settings. Moreover, we were also able to
reconstruct the driving scene input images from Berkeley DeepDrive with high
resolution and quality using this method. However, when we applied GANomaly
to Berkeley DeepDrive for the task of anomaly detection, it suffered from low
accuracy in the discrimination stage. The large number of classes in each
image, the various viewing angles of each class, and the different weather and
light conditions in which the images were taken posed a great challenge for
the method. In addition, the objects selected as unknown objects (trains,
motorcycles, riders, bicycles) occupied far fewer pixels in the images
compared to the known objects. Another reason for the failure is that the
method was able to reconstruct all the unknown objects that were not used
during training. This negatively impacted the threshold metric used for
classifying images into anomalous and non-anomalous. As a consequence, the
threshold metric, which is fundamentally based on the mean squared error in
the latent space, was not able to discriminate known (non-anomalous) images
from unknown (anomalous) images. The method classified almost all the
anomalous images in the test data as normal (non-anomalous) images, hence
producing a high number of false negatives.
So, based on the results of our replication of the GANomaly approach on the
BDD dataset, the current state-of-the-art methods for anomaly detection in
such images have not provided any indication that they could be directly used
for anomaly detection in highly complex driving scenes. With our architecture
modifications and hyperparameter adjustments, we were able to reconstruct the
complex images in high resolution and quality, but at the discrimination
level the conducted approach with our modifications was not able to
discriminate the novel objects contained in the driving scenes. In the future,
other approaches will be explored, and more experiments will be applied to
this approach, towards finding the optimal adjustments for detecting anomalies
in such highly complex driving scenes.
References

1. Akcay, S., Atapour-Abarghouei, A., Breckon, T.P.: GANomaly: Semi-supervised
anomaly detection via adversarial training. In: Asian Conference on Computer
Vision. pp. 622–637. Springer (2018)
2. Alexander, A., et al.: Variational autoencoder for end-to-end control of autonomous
driving with novelty detection and training de-biasing. pp. 568–575. IEEE (2018)
3. An, J., Cho, S.: Variational autoencoder based anomaly detection using recon-
struction probability. Special Lecture on IE 2(1) (2015)
4. Aniculaesei, A., Grieser, J., Rausch, A., Rehfeldt, K., Warnecke, T.: Towards a
holistic software systems engineering approach for dependable autonomous sys-
tems. In: Stolle, R., Scholz, S., Broy, M. (eds.) Proceedings of the 1st International
Workshop on Software Engineering for AI in Autonomous Systems. pp. 23–30.
ACM, New York, NY, USA (2018).
5. Aniculaesei, A., Griesner, J., Rausch, A., Rehfeldt, K., Warnecke, T.: Graceful
degradation of decision and control responsibility for autonomous systems based
on dependability cages. In: 5th International Symposium on Future Active Safety
Technology toward Zero. Blacksburg, Virginia, USA (2019)
6. Behere, S., Törngren, M.: A functional architecture for autonomous driving. In:
Kruchten, P., Dajsuren, Y., Altinger, H., Staron, M. (eds.) Proceedings of the First
International Workshop on Automotive Software Architecture. pp. 3–10. ACM,
New York, NY, USA (2015).
7. Behere, S., Törngren, M.: A functional reference architecture for autonomous driv-
ing. Information and Software Technology 73, 136–150 (2016).
8. Japkowicz, N., Myers, C., Gluck, M.: A novelty detection approach to classifi-
cation. In: Proceedings of the 14th International Joint Conference on Artificial
Intelligence - Volume 1. p. 518–523. IJCAI’95, Morgan Kaufmann Publishers Inc.,
San Francisco, CA, USA (1995)
9. Krizhevsky, A., Nair, V., Hinton, G.: CIFAR: Learning multiple layers of features
from tiny images (retrieved:
10. LeCun, Y., Cortes, Burges, et al.: MNIST dataset of handwritten digits,
https://, (retrieved: 2023.02.10)
11. Mahadevan, V., LI, W.X., Bhalodia, V., Vasconcelos, N.: Anomaly detection in
crowded scenes. In: Proceedings of IEEE Conference on Computer Vision and
Pattern Recognition. pp. 1975–1981 (2010)
12. Maurer, M., Gerdes, J.C., Lenz, B., Winner, H.: Autonomes Fahren. Springer
Berlin Heidelberg, Berlin, Heidelberg (2015). ISBN 978-3-662-45854-9
13. Mauritz, M., Rausch, A., Schaefer, I.: Dependable ADAS by combining design time
testing and runtime monitoring. In: FORMS/FORMAT 2014 - 10th Symposium on
Formal Methods for Automation and Safety in Railway and Automotive Systems (2014)
14. Raulf, C., et al.: Dynamically configurable vehicle concepts for passenger transport.
In: 13. Wissenschaftsforum Mobilität "Transforming Mobility What Next". Duis-
burg, Germany (2021)
15. Rausch, A., Sedeh, A.M., Zhang, M.: Autoencoder-based semantic novelty detec-
tion: Towards dependable ai-based systems. Applied Sciences 11(21), 9881 (2021).
16. Rushby, J.: Quality measures and assurance for AI software, vol. 18 (1988)
17. Sabokrou, M., Khalooei, M., Fathy, M., Adeli, E.: Adversarially learned one-class
classifier for novelty detection. In: Proceedings of the IEEE conference on computer
vision and pattern recognition. pp. 3379–3388 (2018)
18. Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., Chen, X.:
Improved techniques for training gans. Advances in neural information processing
systems 29 (2016)
19. Schlegl, T., Seeböck, P., Waldstein, S.M., Schmidt-Erfurth, U., Langs, G.: Unsupervised
anomaly detection with generative adversarial networks to guide marker
discovery. CoRR abs/1703.05921 (2017)
20. Sintini, L., Kunze, L.: Unsupervised and semi-supervised novelty detection using
variational autoencoders in opportunistic science missions. In: BMVC (2020)
21. Wang, H.g., Li, X., Zhang, T.: Generative adversarial network based novelty
detection using minimized reconstruction error. Frontiers of Information Technology
and Electronic Engineering 19, 116–125 (01 2018)
22. Youtie, J., Porter, A.L., Shapira, P., Woo, S., Huang, Y.: Autonomous systems: A
bibliometric and patent analysis. Tech. rep., Expertenkommission Forschung und
Innovation (2017)
23. Yu, F., Chen, H., Wang, X., Xian, W., et al.: BDD100K: A diverse driving dataset
for heterogeneous multitask learning (2020) (retrieved: 2023.02.10)
24. Zenati, H., Foo, C.S., Lecouat, B., Manek, G., Chandrasekhar, V.R.: Efficient GAN-based
anomaly detection. CoRR abs/1802.06222 (2018)
As the Technology Readiness Levels (TRLs) of self-driving vehicles increase, it is necessary to investigate the Electrical/Electronic(E/E) system architectures for autonomous driving, beyond proof-of-concept prototypes. Relevant patterns and anti-patterns need to be raised into debate and documented. This paper presents the principal components needed in a functional architecture for autonomous driving, along with reasoning for how they should be distributed across the architecture. A functional architecture integrating all the concepts and reasoning is also presented.