Structural Health Monitoring of RC structures using optic fiber strain measurements: a deep learning approach
2019 IABSE Congress "The Evolving Metropolis", September 4-6, 2019, New York City
Dimitrios F. KARYPIDIS, M.Sc. Student, Chalmers University of Technology, Gothenburg, Sweden (dimkary@student.chalmers.se).
Final-year student of the Complex Adaptive Systems Master's programme and research assistant in machine learning applications in civil engineering.
Mats GRANATH, Associate Professor, University of Gothenburg, Gothenburg, Sweden (mats.granath@physics.gu.se).
Background in theoretical condensed matter physics and complex systems; researcher in AI-based systems.
Carlos G. BERROCAL, PhD, Chalmers University of Technology, Gothenburg, Sweden (carlos.gil@chalmers.se).
Postdoctoral researcher in the field of durability of RC and structural monitoring, currently working on digitalization of transport infrastructure.
Peter SIMONSSON, PhD, Swedish Transport Administration, Luleå, Sweden (peter.simonsson@trafikverket.se).
Bridge expert in a client organization and researcher in construction engineering and building processes.
Rasmus REMPLING, Associate Professor, Chalmers University of Technology, Gothenburg, Sweden (rasmus.rempling@chalmers.se).
Researcher in the field of Structural and Construction Engineering with an interest in AI-based and automated processes.
Contact: carlos.gil@chalmers.se
1 Abstract
This paper reports the early findings of an ongoing project aimed at developing new methods to upgrade the
current maintenance strategies of the civil and transport infrastructure. As part of these new methods, the
use of Machine Learning (ML) algorithms is being investigated to constitute the core of a new generation of
more accurate and robust structural health monitoring (SHM) systems for concrete structures. Unlike most
existing SHM systems, which rely on the analysis of the structure's natural frequencies based on data
obtained from accelerometers, the present study uses a distributed optic fiber system to monitor the strain
Deep Autoencoder algorithm (DAE) can successfully quantify the damage attributable to transverse cracks in
a reinforced concrete beam subjected to three-point loading. Future applications will feature the
determination of crack locations, early detection of reinforcement corrosion as well as other types of damage
such as splitting cracks or surface spalling.
Keywords: structural health monitoring, machine learning, deep autoencoders, anomaly detection, concrete
structures, distributed optic fiber.
2 Introduction
Recent advancements in digital technology and
communications have rendered possible the use of
real-time monitoring systems, which constantly
receive, process and analyze streams of data
obtained from distributed sensor networks.
Handling and using large streams of data is a
demanding task, which has occupied data scientists in
recent years. One of the most promising tools for handling
such tasks is Machine Learning (ML). In recent
years, thanks to the increase in computational
capacity of modern computers, a particular subfield
of ML called Deep Learning (DL) has dominated
the research arena, since it appears to be the best-known
approach for solving prediction and classification tasks
[1]. DL has successfully been implemented and is
considered to be the state-of-the-art method in a
plethora of applications such as image recognition,
self-driving cars, machine translation, and financial time
series prediction.
A potential application where DL stands out as a
promising tool is Structural Health Monitoring
(SHM). SHM is the constant monitoring of structural
systems to detect, localize and assess irregularities
and defects. Although SHM has been
successfully implemented in sectors such as the aerospace
and automotive industries, its application to
transport infrastructure has been hindered by the
singular nature of civil structures. Current SHM
systems still rely on deterministic methods, such as
signal processing and Finite Element Analysis (FEA).
These techniques, although very practical when
used by themselves, suffer from two main issues: (i)
lack of robustness to noise and (ii) inflexibility.
More recent approaches have therefore
applied DL techniques in order to optimize SHM
procedures, most of them falling under the
categories of object detection (computer vision) and
multi-feature classification [2], [3], which are
sometimes impractical to implement in large-scale
projects.
This paper explores the possibility of developing an
anomaly detection system for reinforced concrete
elements using DL. The system has been tested on
RC beams subjected to 3-point bending where
strains are measured via a distributed optical fiber
attached to the reinforcement. Subsequently, the
strain profiles were fed to a Deep Autoencoder
Network (DAE) with different configurations. The
results show that, after proper training, the
network is able to detect the anomalous states of
the beam by measuring the reconstruction error of
the preceding observations.
3 Methodology
3.1 Deep Autoencoders
The goal of the current application is to build an
anomaly detection system. To that end, one of the
most commonly used neural network architectures
is the Deep Autoencoder (DAE) [1] (Fig. 1). DAEs fall
under the category of semi-supervised ML
algorithms, which use neural network
architectures to solve the task of representation
learning. Their purpose is to reduce the dimensionality
of the data while keeping its essential structural
information, similar to Principal Component Analysis. This
technique works well when the data is
multidimensional and some of the features are
either correlated or non-uniformly significant.
Figure 1. Example of a Deep Autoencoder Network
architecture.
The unique property of DAEs compared to
conventional artificial neural networks is that the
network architecture consists of two parts: the
encoder and the decoder. The encoder consists of
sublayers of decreasing node number, "squeezing"
the initial input $x$ into the smallest sublayer, called the
bottleneck or latent-space layer. The number of
neurons in the bottleneck layer corresponds to the
number of features we want to compress the data
into. The decoder is the part of the network from
the bottleneck to the output; the topology of the
decoder sublayers mirrors that of the encoder. The
purpose of Autoencoders is to map the input data
to itself, i.e. to minimize the function:

$L(x, \hat{x}) = L(x, f(g(x)))$  (1)

where $x$ and $\hat{x}$ are the pair of original and
reconstructed inputs, $f$ and $g$ are the decoding
and encoding functions respectively, and $L(x, \hat{x})$ is the loss
function which must be optimized. In our case we
use the typical Mean Squared Error (MSE) function
shown below:

$L(x, \hat{x}) = \lVert x - \hat{x} \rVert^2 + \lambda \sum_{i} w_i^2$  (2)

where the second additive term is the $L_2$
regularizer, which inhibits model overfitting
without increasing bias significantly.
The bottleneck architecture inhibits the non-useful
identity mapping $f(g(x)) = x$. The efficiency of the
network is measured by the reconstruction error
produced by feeding it unseen data from the dataset.
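As an illustration of the encoder-decoder structure and the regularized MSE loss of Eq. (2), a minimal forward pass can be sketched in Python. The layer sizes, ReLU activation, and random initialization below are assumptions made for illustration, not the configuration used in this study:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(a):
    return np.maximum(a, 0.0)

# Mirrored topology: 20 -> 8 -> 3 (bottleneck) -> 8 -> 20.
# Layer sizes are illustrative assumptions only.
sizes = [20, 8, 3, 8, 20]
weights = [rng.normal(0.0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]

def forward(x):
    """Encode x down to the bottleneck, then decode back to input size."""
    h = x
    for w in weights[:-1]:
        h = relu(h @ w)
    return h @ weights[-1]          # linear output layer

def loss(x, lam=1e-4):
    """MSE reconstruction error plus an L2 weight penalty, as in Eq. (2)."""
    x_hat = forward(x)
    mse = np.mean((x - x_hat) ** 2)
    l2 = lam * sum(np.sum(w ** 2) for w in weights)
    return mse + l2

x = rng.normal(size=(5, 20))        # 5 observations, 20 strain features
print(forward(x).shape, loss(x))
```

In a trained DAE, the weights would of course be fitted by minimizing this loss with gradient descent; the sketch only shows how the mirrored topology and the loss fit together.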
3.2 Anomaly Detection
Anomaly detection (also outlier detection) refers to
a family of techniques that systematically monitor a
system in order to identify rare items, events or
observations which raise suspicions by differing
significantly from the majority of the data [4]. These
unusual observations quite often indicate either
that our system has reached an anomalous
condition or that the newest datapoint is defective,
both of which should be examined with care. The
use of DAEs has become a staple for anomaly
detection tasks. With proper implementation, they
provide a robust and reliable model. The algorithm is the
following:
Step-1: Examine the data and use an appropriate
preprocessing scheme.
Step-2: Decide which data fall under the normal
category.
Step-3: Train the autoencoder with the normal
data only, until convergence.
Step-4: Feed all the training data into the model
(training+validation), in order to obtain the
training reconstruction error.
Step-5: Construct a threshold error according to
some statistic derived from the training
reconstruction error (a common choice is
$thr = \max_{\text{training}} L(x, \hat{x})$) [5].
Step-6: Each sample that has a greater
reconstruction error than the threshold, is
characterized as an anomaly. With the total
reconstruction error profile, we can have a
sanity check on the efficiency of our network.
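Steps 1-6 above can be sketched as follows. Since the trained DAE itself is not reproduced here, a simple rank-1 PCA-style reconstructor stands in for the trained network, and all data, shapes, and names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a trained autoencoder: a rank-1 PCA-style reconstruction,
# used purely to make the sketch self-contained and runnable.
def fit_reconstructor(normal_data):
    mean = normal_data.mean(axis=0)
    centered = normal_data - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    v = vt[0]                       # first principal direction = "bottleneck"
    def reconstruct(x):
        c = x - mean
        return mean + np.outer(c @ v, v)
    return reconstruct

def reconstruction_error(x, reconstruct):
    return np.mean((x - reconstruct(x)) ** 2, axis=1)

# Steps 2-3: train on the "normal" states only.
normal = rng.normal(size=(200, 10))
reconstruct = fit_reconstructor(normal)

# Steps 4-5: threshold from the training reconstruction errors (max statistic).
threshold = reconstruction_error(normal, reconstruct).max()

# Step 6: flag samples whose error exceeds the threshold.
shifted = normal + 5.0              # strongly perturbed observations
flags = reconstruction_error(shifted, reconstruct) > threshold
print(flags.mean())
```

Replacing `fit_reconstructor` with a trained DAE gives exactly the workflow of Steps 1-6; only the reconstruction model changes.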
Figure 2. Close up view of the optic fiber sensor
attached to a steel reinforcing bar used in the
experimental tests.
4 Current implementation
4.1 Main concept
The goal of the current implementation is to
examine the possibility of using DAE as a monitoring
system in concrete structures. More specifically, the
aim is to monitor structures using only
measurements of reinforcement strains obtained
from a distributed optic fiber sensor, see Fig. 2.
The idea is to model the structure using Finite
Element Analysis (FEA), load it until failure with
different load cases, gather the features needed for
the damage estimation and train the model with the
normal states of each loading case.
This approach, while reasonable, does not account
for the unavoidable error existing in all data
acquired from integrated sensors (imperfect
application, device noise etc.) nor the inherent
heterogeneity of concrete. Since concrete is
commonly modelled as a homogeneous material in
FEA, the existing randomness of the material is not
well captured by conventional FE models, thereby
rendering them divergent from the real structure.
To tackle this issue, the initial states of the physical
element were included into the training data. This
step, which acts as a “calibration”, can be easily
applied to large scale projects and makes the neural
network generalize even better.
Figure 3. Tested beam after failure.
2019 IABSE Congress The Evolving Metropolis
September 4-6, 2019, New York City
4
Figure 4. On the left, the strain profile of the beam FEA. On the right, the strain profile of one of the tested
beams. The red dashed line marks the state up to which the networks were trained (see 4.3).
4.2 Set up
The experimental set-up consisted of 6 concrete
beams with dimensions 90×15×10 cm, reinforced
with two 10 mm rebars of B500B steel placed with
a concrete cover of 25 mm. The concrete had a cube
compressive strength of 60 MPa and a tensile
splitting strength of 3.5 MPa, both measured at 28
days. For each beam, only one of the longitudinal
bars was outfitted with an optic fiber sensor. The
signal frequency was 1.25 Hz and the spatial resolution
was 0.65 mm.
The beams were tested to failure under three-point
loading using a displacement-controlled setup at a
displacement rate of 1 mm/min. Two of the beams
were loaded monotonically and four were
subjected to cyclic loading. Stirrups were not
provided in order to promote shear failure [6],
which occurred for all of the beams tested, see
Fig. 3. Moreover, the monotonic tests were
numerically simulated using the commercial FEA
software DIANA.
4.3 Training procedure
The training data consisted of the strain states up
to 50% of the total capacity for the results
obtained from the FEA, whereas only about the first
15% of the strain states obtained from each
experiment were used for training (Fig. 4).
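The split described above can be expressed as a short sketch. The array shapes, variable names, and the assumption that rows are ordered by load level are illustrative, not taken from the study:

```python
import numpy as np

def training_slice(states, fraction):
    """Keep the first `fraction` of the load history as 'normal' training data."""
    n_train = int(len(states) * fraction)
    return states[:n_train]

fea_states = np.zeros((400, 50))    # FEA strain states, illustrative shape
exp_states = np.zeros((1000, 50))   # measured strain states, illustrative shape

train_fea = training_slice(fea_states, 0.50)   # up to 50% of capacity (FEA)
train_exp = training_slice(exp_states, 0.15)   # first ~15% of each experiment
print(len(train_fea), len(train_exp))
```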
4.4 Preprocessing
The most crucial part of the implementation is the
data preprocessing. Feature-wise preprocessing
such as z-score standardization and min-max
normalization was discarded. The reason is that, for
the current application, all features $x_j$, $j = 1, \dots, n$,
have the same physical meaning (strain at some
rebar position) and thus the same underlying
range, while the total range of the values will not be
known beforehand in a real application.
Consequently, the preprocessing scheme that was
tested was a row-wise zero-mean centering approach,
where each feature of each datapoint is
transformed as follows:

$x_{ij}^{new} = x_{ij} - \bar{x}_i, \quad i = 1, \dots, N; \ j = 1, \dots, n$  (3)

where $N$ is the number of observations and $\bar{x}_i$ is the
mean of all the features in the current observation.
In common ML applications, this type of
preprocessing is discouraged, since transforming
features with different units by the same operation
removes a great part of the relational information.
Nevertheless, for the reasons discussed above, it is an ideal
candidate for the current application.
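Eq. (3) amounts to subtracting each observation's own mean from all of its features, which can be sketched as (the example strain values are illustrative):

```python
import numpy as np

def row_center(x):
    """Row-wise zero-mean centering (Eq. 3): subtract each observation's
    own mean strain from all of its features."""
    return x - x.mean(axis=1, keepdims=True)

strains = np.array([[10.0, 20.0, 30.0],
                    [1.0, 2.0, 3.0]])
centered = row_center(strains)
print(centered)
```

After the transformation, every row has zero mean, regardless of its original magnitude, which is exactly why the scheme is insensitive to the unknown total range of the strains.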
4.5 Damage classification
In bridge condition assessment (BCA) a common
practice to assess the level of existing damage in the
structure is by creating different criteria or
thresholds. Accordingly, three damage thresholds,
$T_0$, $T_1$ and $T_2$, were created by examining the
distribution of the strains across the rebar from the
analysis, which are the small, significant and hazardous
damage thresholds respectively. Thus, we have:

$T_0 = \max(err)\,(1 + \tfrac{1}{2}\,\tilde{\sigma})$  (4)

$T_1 = T_0\,(1 + c_1\,\tilde{\sigma})$  (5)

$T_2 = T_0\,(1 + c_2\,\tilde{\sigma})$  (6)

where:

$\tilde{\sigma} = \sqrt{\exp(\sigma'^2) - 1}$  (7)

$\sigma' = \sigma(\log(err))$  (8)

with $\sigma$ being the standard deviation and $err$ a
vector with all the training reconstruction errors.
Intuitively, $\tilde{\sigma}$ is a custom dispersion measure,
fitting the current application. It must be noted that the above
rules were constructed empirically, after trial and
error, taking into consideration the dispersion of the
reconstruction errors resulting from measurements
of different noise levels. The coefficients $c_1$ and $c_2$ are
multipliers that dictate the sensitivity of the system
and were assigned fixed values in our application. In fact, one
could construct arbitrary levels of damage $c_i$,
depending on the significance of the structure. It
must be noted that the proposed thresholds should
be used with the training scheme that involves both
the FEA and the initial-state data.
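Under the reading of Eqs. (4)-(8) given above, the thresholds can be computed from the training reconstruction errors as follows. This is a sketch: the multiplier values `c1` and `c2` and the synthetic error distribution are placeholders, not the values used in the study:

```python
import numpy as np

def damage_thresholds(err, c1, c2):
    """Small / significant / hazardous thresholds from Eqs. (4)-(8)."""
    sigma_p = np.std(np.log(err))               # Eq. (8): std of log-errors
    sigma_t = np.sqrt(np.exp(sigma_p**2) - 1)   # Eq. (7): custom dispersion
    t0 = err.max() * (1 + 0.5 * sigma_t)        # Eq. (4): small damage
    t1 = t0 * (1 + c1 * sigma_t)                # Eq. (5): significant damage
    t2 = t0 * (1 + c2 * sigma_t)                # Eq. (6): hazardous damage
    return t0, t1, t2

rng = np.random.default_rng(2)
err = rng.lognormal(mean=-4.0, sigma=0.5, size=500)   # synthetic errors
t0, t1, t2 = damage_thresholds(err, c1=2.0, c2=4.0)
print(t0 < t1 < t2)
```

Note that by construction $T_0$ always exceeds the largest training error, so only states that reconstruct worse than anything seen during training can be classified as damaged.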
Figure 5. Losses and reconstruction errors for beams 2 and 3, subjected to monotonic and cyclic loading,
respectively.
5 Results and Discussion
Fig. 5 presents the results of the DAE for two of the
tested beams, one with monotonic and one with
cyclic loading, in terms of reconstruction error and
train and validation loss. All the cases studied were
tested with various random seeds, in order to verify
the robustness of the results. For all the tested
beams, the model was able to easily fit the data,
which was a more cumbersome task before the
preprocessing scheme described in 4.4. Moreover,
in all cases, the damage thresholds classify the
damage level successfully, though less efficiently for
the beams with heavier measurement noise. It must be
noted that the network could capture the states of all the
beams, regardless of the type of loading applied.
The hazardous damage threshold was generally reached at
about 70% to 90% of the beam capacity. However, in all cases, the
significant damage threshold can efficiently trigger
an inspection, thereby helping to avoid the total collapse of
the concrete element.
6 Conclusions
In this research project we have presented the
possibility of constructing a DL anomaly detection
model that monitors the damage state of a concrete
beam using its strain profile only. The current
approach was implemented successfully and could
possibly be extended to a plethora of structures. As
future research, more models could be
investigated, as well as more complex structural
elements with different load patterns. Verifying the
effectiveness of the proposed method on various
structures is imperative before a full-scale
application, which could possibly revolutionize the
field of SHM in bridges.
7 Acknowledgements
We thank Trafikverket, NCC, WSP and Microsoft for
their valuable contribution.
8 References
[1] I. Goodfellow, Y. Bengio and A. Courville, Deep Learning, MIT Press, 2016.
[2] Y. Bao, Z. Tang, H. Li and Y. Zhang, "Computer vision and deep learning-based data anomaly detection method for structural health monitoring," Structural Health Monitoring, vol. 18, no. 2, pp. 401-421, 2019.
[3] Y. J. Cha, W. Choi and O. Büyüköztürk, "Deep learning-based crack damage detection using convolutional neural networks," Computer-Aided Civil and Infrastructure Engineering, vol. 32, no. 5, pp. 361-378, 2017.
[4] A. Zimek and E. Schubert, "Outlier Detection," Encyclopedia of Database Systems, 2017.
[5] L. Beggel, M. Pfeiffer and B. Bischl, "Robust Anomaly Detection in Images using Adversarial Autoencoders," arXiv, 2019.
[6] J. G. MacGregor and J. K. Wight, Reinforced Concrete: Mechanics and Design, 6th ed., Harlow: Pearson Education, 2011.