
Compressed sensing techniques for detecting damage in structures

Article

Structural Health Monitoring
12(4) 325–338
© The Author(s) 2013
Reprints and permissions: sagepub.co.uk/journalsPermissions.nav
DOI: 10.1177/1475921713486164
shm.sagepub.com

Compressed sensing techniques for detecting damage in structures

David Mascareñas¹, Alessandro Cattaneo², James Theiler³ and Charles Farrar¹
Abstract
One of the principal challenges facing the structural health monitoring community is taking large, heterogeneous sets of data collected from sensors, and extracting information that allows the estimation of the damage condition of a structure. Another important challenge is to collect relevant data from a structure in a manner that is cost-effective, and respects the size, weight, cost, energy consumption and bandwidth limitations placed on the system. In this work, we established the suitability of compressed sensing to address both challenges. A digital version of a compressed sensor is implemented on-board a microcontroller similar to those used in embedded SHM sensor nodes. The sensor node is tested in a surrogate SHM application using acceleration measurements. Currently, the prototype compressed sensor is capable of collecting compressed coefficients from measurements and sending them to an off-board processor for signal reconstruction using ℓ1 norm minimization. A compressed version of the matched filter known as the smashed filter has also been implemented on-board the sensor node, and its suitability for detecting structural damage will be discussed.
Keywords
Compressed sensing, Structural Health Monitoring, sparse modeling, low-power sensing
Introduction

Data for structural health monitoring (SHM) applications are generally collected using a distributed sensor network. Distributed sensor networks made up of nodes with hard-wired data and communication lines generally have high installation costs, particularly in the retrofit mode. Lynch et al.¹ reported that hard-wire installation could consume 75% of the total testing time and installation costs could consume 25% of the total system costs. The goal is to transition to low-power, wireless sensor networks featuring minimal installation costs.² Two of the major problems with these types of sensor networks are the minimization of energy and communication bandwidth. Compressed sensing techniques hold promise to help address both of these demands. By collecting compressed coefficients, the signal of interest can be represented using a fraction of the measurements required by traditional Nyquist sampling. The result is reduced energy consumption for data collection, storage and transmission.³,⁴ In addition, the bandwidth required to transmit the sampled signal is also significantly reduced.

The focus of this work is to evaluate the applicability of compressed sensing techniques to expand the capabilities of wireless sensor networks for SHM applications. First, a novel, compressive sensing–based framework for implementing low-power SHM wireless sensor networks will be proposed. Next, three compressed sensing techniques are characterized in order to demonstrate their applicability to the proposed wireless sensor network framework for SHM applications. The compressed sensing techniques that are investigated in this research are as follows. First, ℓ1 norm minimization-based techniques will be used to reconstruct experimentally measured and compressed acceleration signals from a surrogate three-story structure excited at a single frequency. These reconstructions will make use of the Fourier basis. Next, ℓ1 norm minimization-based techniques will …
¹Engineering Institute, Los Alamos National Laboratory, Los Alamos, NM, USA
²Department of Mechanics, Politecnico di Milano, Milan, Italy
³ISR-3, Los Alamos National Laboratory, Los Alamos, NM, USA

Corresponding author:
David Mascareñas, Engineering Institute, Los Alamos National Laboratory, P.O. Box 1663, MS T001, Los Alamos, NM 87545, USA.
Email: dmascarenas@lanl.gov
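
The workflow summarized in the abstract and introduction (collect a small number of random projections on the sensor node, then reconstruct the signal off-board by ℓ1 norm minimization in a sparsifying basis) can be sketched in a few lines of Python. The snippet below is a minimal illustration only, not the authors' implementation: the article reconstructs experimentally measured acceleration in the Fourier basis, whereas the synthetic DCT-sparse signal, the Bernoulli measurement matrix, and the cvxpy solver used here are assumptions of the sketch.

```python
import numpy as np
import cvxpy as cp
from scipy.fft import idct

rng = np.random.default_rng(0)
n, m = 256, 64                       # signal length and number of compressed coefficients

# Stand-in for the measured acceleration: a signal that is exactly sparse in an
# orthonormal DCT basis (the article itself uses the Fourier basis and real data).
Psi = idct(np.eye(n), norm="ortho", axis=0)      # columns are DCT basis vectors
s_true = np.zeros(n)
s_true[[5, 12, 40]] = [1.0, -0.6, 0.3]
x = Psi @ s_true

# On-node compression: random Bernoulli (+/-1) projections, m << n.
Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
y = Phi @ x                                      # compressed coefficients sent off-board

# Off-board reconstruction by basis pursuit: min ||s||_1  s.t.  Phi Psi s = y.
s = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm1(s)), [(Phi @ Psi) @ s == y]).solve()
x_hat = Psi @ s.value

print("relative reconstruction error:",
      np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

With an exactly sparse signal and noiseless projections, the recovery is essentially exact; with measured data the reconstruction quality depends on how compressible the acceleration record is in the chosen basis.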
... Their research results showed that the sparsity of structural acceleration data is an important factor affecting the accuracy of data reconstruction. Mascareñas et al. [41] studied the application of compressed sensing technology to structural damage identification. In their approach, a compressed sensor is first used to collect compressed coefficients from the measured signals, which are then transmitted to an off-board processor that reconstructs the data by minimizing the ℓ1 norm. ...
Chapter
Big data refers to data sets whose volume greatly exceeds the capabilities of conventional database software in terms of data acquisition, storage, management and analysis. Such data are routinely generated during bridge operation and maintenance, so their storage, processing and utilization are an important research topic in the field of intelligent bridge operation and maintenance. This chapter introduces the basic concept of big data, the common big data computing methods, and the cyber-physical systems used to link the physical space with the information space, with emphasis on the application of big data computing methods in civil engineering, especially bridge engineering.
... Other examples of SHM applications using CS can be seen in Mascareñas et al. (2013) or Jana and Nagarajaiah (2022). In these works, signals are reconstructed in the time domain using a generic Fourier or wavelet basis, or alternatively using the underlying physics to form an analytical reduced-order model basis. ...
Preprint
Full-text available
High dimensional systems, such as large civil infrastructure, exhibit fundamental patterns in space and time which can be exploited for efficient data acquisition, reconstruction, identification and damage detection. This study numerically investigates the applicability of compressed sensing (CS) theory to reconstruct the static displacement field of a multi-storey building using a small number of displacement samples. A full-scale finite element (FE) model of the building, developed using the OpenSees software, is used to capture its static displacement field and vibratory mode shapes, which serve as a tailored, physics-guided basis. A sample of displacement data is then randomly selected and used to reconstruct the entire displacement field. The results demonstrate that a reliable full-scale reconstruction is feasible with only approximately one percent of the total degrees of freedom in the original model. This highlights the effectiveness of the CS paradigm in accurately reconstructing various measurement fields within buildings, emphasizing its potential to enhance the efficiency of information extraction from spatially distributed sensor networks.
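
Once the mode shapes are treated as a basis, the physics-guided reconstruction described above reduces to a small sparse recovery problem. The sketch below illustrates the idea under stated assumptions: the "mode shapes" are synthetic sinusoids rather than outputs of the cited OpenSees model, and the sampling level of roughly one percent of the degrees of freedom simply mirrors the figure quoted in the abstract.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n_dof, r = 5000, 200                 # degrees of freedom and candidate shape functions

# Hypothetical physics-guided basis: smooth synthetic shapes stand in for the
# mode shapes that the cited study extracts from its FE model.
z = np.linspace(0.0, 1.0, n_dof)
Psi = np.column_stack([np.sin((k + 0.5) * np.pi * z) for k in range(r)])

# "True" static displacement field: a combination of only a few shapes.
c_true = np.zeros(r)
c_true[[0, 2, 7]] = [1.0, 0.3, -0.1]
u = Psi @ c_true

# Sample roughly one percent of the degrees of freedom at random.
m = n_dof // 100
idx = rng.choice(n_dof, size=m, replace=False)
y = u[idx]

# Recover the sparse basis coefficients from the sampled displacements.
c = cp.Variable(r)
cp.Problem(cp.Minimize(cp.norm1(c)), [Psi[idx, :] @ c == y]).solve()
u_hat = Psi @ c.value

print("relative error over all DOFs:",
      np.linalg.norm(u_hat - u) / np.linalg.norm(u))
```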
... Generally, these methods are related to optimization-based methods [24,25], such as sparse optimization methods [26][27][28][29][30][31], waveform matching and dictionary learning-based methods [32][33][34][35], and multi-sensor fusion-based methods [36]. Although the need for accurate reconstruction of industrial vibration signals can be effectively addressed by the signal reconstruction methods described above, the traditional methods still suffer from drawbacks such as a large number of parameters, slow convergence, and significant error at high compression ratios [37][38][39], which makes it difficult to apply the theory to the online monitoring of actual production equipment. ...
Article
Full-text available
To address the problem that noise seriously degrades the online monitoring of component signals from outdoor machinery, this paper proposes a signal reconstruction method integrating a deep neural network with compressed sensing, called ADMM-1DNet, and gives a detailed online vibration signal monitoring scheme. The basic approach of ADMM-1DNet is to map the update steps of the classical Alternating Direction Method of Multipliers (ADMM) onto a deep network architecture with a fixed number of layers, with each stage corresponding to one iteration of traditional ADMM. Unlike other unfolded networks, ADMM-1DNet learns a redundant analysis operator, which reduces the impact of high outdoor noise on reconstruction error by improving the sparsity level of the signal. The implementation scheme covers both the field operation of the mechanical equipment and the operation of the data center: the network trained at the local data center reconstructs the received outdoor vibration signal data online. Experiments on two open-source bearing datasets verify that the proposed method outperforms the baseline methods in terms of reconstruction accuracy and feature preservation, and that the proposed implementation scheme can be adapted to different types of vibration signal reconstruction tasks.
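
Deep unfolding networks such as the one described above take a classical iterative solver and fix its number of iterations as network layers. As background, the sketch below shows the classical ADMM iteration for an ℓ1-regularized reconstruction problem, i.e. the kind of update that each layer of such a network mirrors. It is a generic textbook ADMM loop, not the ADMM-1DNet architecture itself (which additionally learns a redundant analysis operator).

```python
import numpy as np

def soft_threshold(v, kappa):
    """Proximal operator of kappa * ||.||_1 (the nonlinearity in each layer)."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_l1(A, y, lam=0.1, rho=1.0, n_iter=200):
    """Classical ADMM for min 0.5*||A x - y||^2 + lam*||x||_1.
    One pass through this loop corresponds to one 'layer' of an unrolled
    network; learned variants replace the fixed operators with trained ones."""
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))   # cache the x-update factor
    Aty = A.T @ y
    for _ in range(n_iter):
        rhs = Aty + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))   # x-update
        z = soft_threshold(x + u, lam / rho)                # z-update
        u = u + x - z                                       # dual update
    return z

# Tiny demo: recover a sparse vector from noisy random projections.
rng = np.random.default_rng(2)
n, m = 400, 120
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)
y = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = admm_l1(A, y, lam=0.05)
print("estimated support:", np.flatnonzero(np.abs(x_hat) > 0.1))
```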
Article
In the realm of structural health monitoring (SHM) of bridge structures, the accurate reconstruction of girder‐end displacement (GED) is crucial for identifying potential structural damage and ensuring the monitoring system’s reliability. A novel fine‐grained spatial (FGS) attention mechanism, combined with efficient channel attention (ECA), has been proposed to effectively utilize multisource monitoring data. This hybrid attention mechanism has been integrated into an arithmetic optimization algorithm–bidirectional long short‐term memory (AOA–BiLSTM) framework for reconstructing GED using non‐GED data, including deflection, temperature, strain, and traffic data. Data are organized into a two‐dimensional array based on sensor types and spatial locations to capture interchannel and intrachannel correlations. ECA captures local correlations among different sensor types, while the proposed FGS enhances model interpretability by focusing on local dependencies within each sensor type. Huber loss is employed for robust performance, and AOA techniques are used for efficient hyperparameter optimization. Validation with real‐world data from a cable‐stayed bridge demonstrates the necessity and efficacy of considering multidimensional information correlations in response reconstruction for SHM applications. This work lays a theoretical foundation for improving safety assessments, anomaly detection, data recovery, and virtual sensing in bridge structures.
Article
Structural health monitoring (SHM) data have a large volume, which increases the cost of data storage and transmission and the difficulty of structural parameter identification. Compressed sensing (CS) theory provides a signal acquisition and analysis strategy, and signal reconstruction from limited measurements using CS has attracted significant interest. However, the dynamic responses obtained from civil engineering structures contain noise, which degrades the sparsity of the samples and reduces the signal reconstruction accuracy. Therefore, we propose an optimization algorithm for the measurement matrix integrating the Karhunen–Loève transform (KLT) and approximate QR decomposition (KLT-QR) to improve the accuracy of dynamic response reconstruction of SHM data. The KLT reduces the correlation between the measurement matrix and the sparse basis, and the approximate QR decomposition improves the independence between the column vectors of the measurement matrix, thereby optimizing it. Experimental results for a laboratory steel beam indicate that the proposed KLT-QR algorithm outperforms three other algorithms in the accuracy of dynamic response reconstruction (acceleration, displacement, and strain), especially at high compression ratios. Acceleration responses from the Ji’an Bridge are used to verify the advantages of the proposed algorithm. The results demonstrate that the KLT-QR algorithm achieves the highest accuracy in reconstructing the vibration signals and yields better Fourier spectra than the conventional Gaussian measurement matrix.
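
Measurement-matrix optimization schemes of this kind are usually judged by the coherence between the measurement matrix and the sparsifying basis. The sketch below only illustrates that metric together with a generic QR-based row-orthonormalization step; it is not the KLT-QR algorithm from the paper, and the matrices involved are arbitrary stand-ins.

```python
import numpy as np
from scipy.fft import idct

def mutual_coherence(Phi, Psi):
    """Largest off-diagonal entry of the Gram matrix of the column-normalized
    equivalent dictionary D = Phi @ Psi, the quantity that measurement-matrix
    optimization schemes generally try to reduce."""
    D = Phi @ Psi
    D = D / np.linalg.norm(D, axis=0)
    G = np.abs(D.T @ D)
    np.fill_diagonal(G, 0.0)
    return G.max()

rng = np.random.default_rng(3)
n, m = 256, 64
Psi = idct(np.eye(n), norm="ortho", axis=0)     # sparsifying basis (DCT here)
Phi = rng.standard_normal((m, n))               # plain Gaussian measurement matrix

# Generic illustration of the row-independence idea: replace the rows of Phi by an
# orthonormal set spanning the same row space via a QR factorization. This is only
# one ingredient of schemes like KLT-QR, not the algorithm from the paper.
Q, _ = np.linalg.qr(Phi.T)
Phi_orth = Q[:, :m].T

print("coherence, Gaussian Phi:        ", mutual_coherence(Phi, Psi))
print("coherence, orthonormalized rows:", mutual_coherence(Phi_orth, Psi))
```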
Article
Full-text available
The transmission system is a key component for ensuring the stable operation of high-speed trains, so monitoring its condition is essential for operational safety. Sparse representation is now widely used in fault diagnosis. However, as the number of sensors increases, existing methods destroy the internal structure of multi-channel signals and cannot effectively handle the fault diagnosis of multi-channel signals in parallel. Therefore, this article extends the existing sparse representation method to tensor space to extract the coupling information between channels and realize multi-channel fault diagnosis. First, a tensor sparse representation model is proposed to achieve data-level multi-channel signal fusion and inter-channel fault feature extraction. Then, a multimodal dictionary learning algorithm is proposed to adaptively design a data-driven dictionary for feature extraction. Finally, a tensor sparse representation classification method is proposed for intelligent diagnosis. Fault experiments verify the effectiveness and superiority of the method.
Chapter
Bridges are a vital component of the public transit system. However, as such infrastructure systems age, they sustain various sorts of damage, dramatically decreasing their performance and service life. In this setting, effective and efficient bridge health monitoring is critical to lowering maintenance costs and extending the service life of existing bridges. Traditional monitoring techniques require sensors to be installed on bridges, which is costly and time-consuming. This paper presents a novel crowdsensing-based methodology to monitor the health condition of bridges through smartphones in moving vehicles, i.e., indirect monitoring. By collecting continuous data from smartphone users as they cross the bridge and extracting features from the data, damage can be identified by quantifying differences in the distributions of these features. Continuous data collection and transmission at a high sampling frequency pose a particular challenge to public participation, because they could quickly drain the smartphone battery and data plan. In this paper, compressed sensing is introduced into this crowdsensing framework. Compressed sensing can recover a signal from far fewer samples than required by the Nyquist–Shannon sampling theorem through random sampling, which leads to more efficient data collection and transmission. Numerical analysis is conducted to validate the effectiveness of compressed sensing for indirect bridge condition monitoring.
Keywords: Condition monitoring, Compressed sensing, Bridges
Article
Full-text available
Wireless sensor networks (WSNs) for structural health monitoring (SHM) applications can provide the data collection necessary for rapid structural assessment after an event such as a natural disaster puts the reliability of civil infrastructure in question. Technical challenges affecting deployment of such a network include ensuring power is maintained at the sensor nodes, reducing installation and maintenance costs, and automating the collection and analysis of data provided by a wireless sensor network. In this work, a new "mobile host" WSN paradigm is presented. This architecture utilizes nodes that are deployed without resident power. The associated sensors operate on a mechanical memory principle. A mobile host, such as a robot or unmanned aerial vehicle, is used on an as-needed basis to charge the node by wireless power delivery and subsequently retrieve the data by wireless interrogation. The mobile host may be guided in turn to any deployed node that requires interrogation. The contribution of this work is the first field demonstration of a mobile host wireless sensor network. The sensor node, referred to as THINNER, capable of collecting data wirelessly in the absence of electrical power, was developed. A peak displacement sensor capable of interfacing with the THINNER sensor node was also designed and tested. A wireless energy delivery package capable of being carried by an airborne mobile host was developed. Finally, the system engineering required to implement the overall sensor network was carried out. The field demonstration took place on an out-of-service, full-scale bridge near Truth or Consequences, NM.
Conference Paper
The theory of compressive sensing (CS) enables the reconstruction of a sparse or compressible image or signal from a small set of linear, non-adaptive (even random) projections. However, in many applications, including object and target recognition, we are ultimately interested in making a decision about an image rather than computing a reconstruction. We propose here a framework for compressive classification that operates directly on the compressive measurements without first reconstructing the image. We dub the resulting dimensionally reduced matched filter the smashed filter. The first part of the theory maps traditional maximum likelihood hypothesis testing into the compressive domain; we find that the number of measurements required for a given classification performance level does not depend on the sparsity or compressibility of the images but only on the noise level. The second part of the theory applies the generalized maximum likelihood method to deal with unknown transformations such as the translation, scale, or viewing angle of a target object. We exploit the fact the set of transformed images forms a low-dimensional, nonlinear manifold in the high-dimensional image space. We find that the number of measurements required for a given classification performance level grows linearly in the dimensionality of the manifold but only logarithmically in the number of pixels/samples and image classes. Using both simulations and measurements from a new single-pixel compressive camera, we demonstrate the effectiveness of the smashed filter for target classification using very few measurements.
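
In its simplest form, the smashed filter classifies directly from the compressed measurements by comparing them against compressed versions of candidate templates, with no reconstruction step. The sketch below shows that nearest-template rule on hypothetical baseline and damaged signatures; it omits the manifold search over unknown transformations discussed in the abstract, and the signals and noise levels are illustrative assumptions rather than data from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 512, 48
t = np.arange(n) / 1000.0

# Hypothetical class templates: baseline vs. damaged response signatures.
# (Illustrative stand-ins only; the article's templates come from its test structure.)
templates = {
    "baseline": np.sin(2 * np.pi * 53.0 * t),
    "damaged":  np.sin(2 * np.pi * 53.0 * t) + 0.4 * np.sin(2 * np.pi * 140.0 * t),
}

# One shared random measurement matrix, fixed on the sensor node.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

# Observation: the damaged signature plus noise, seen only through its m projections.
x = templates["damaged"] + 0.2 * rng.standard_normal(n)
y = Phi @ x

# Smashed filter (simplest form): match in the compressed domain, no reconstruction.
def classify(y, Phi, templates):
    distances = {label: np.linalg.norm(y - Phi @ s) for label, s in templates.items()}
    return min(distances, key=distances.get), distances

label, distances = classify(y, Phi, templates)
print("declared condition:", label)
```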
Article
The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries: stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), Matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest ℓ1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.
Article
This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f ∈ ℂ^N and a randomly chosen set of frequencies Ω. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes, f(t) = Σ_{τ∈T} f(τ) δ(t − τ), obeying |T| ≤ C_M · (log N)^(−1) · |Ω| for some constant C_M > 0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1 − O(N^(−M)), f can be reconstructed exactly as the solution to the ℓ1 minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C_M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T| · log N). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1 − O(N^(−M)) would in general require a number of frequency samples at least proportional to |T| · log N. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples, provided that the number of jumps (discontinuities) obeys the condition above, by minimizing other convex functionals such as the total variation of f.
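
The recovery result described above can be reproduced qualitatively in a few lines: draw a random set of Fourier coefficients of a spike train and solve the ℓ1 minimization problem. The problem sizes, the random seed, and the use of the cvxpy solver below are assumptions of the sketch, not values from the paper.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(5)
n = 128

# A signal made of a few spikes at unknown locations with unknown amplitudes.
f = np.zeros(n)
T = rng.choice(n, size=6, replace=False)
f[T] = rng.standard_normal(6)

# Observe the Fourier coefficients only on a randomly chosen frequency set Omega.
F = np.fft.fft(np.eye(n))                    # n x n DFT matrix (symmetric)
omega = rng.choice(n, size=40, replace=False)
y = F[omega, :] @ f                          # partial, complex-valued frequency data

# l1 minimization: among all signals consistent with the observed coefficients,
# pick the one with the smallest l1 norm.
g = cp.Variable(n)
constraints = [np.real(F[omega, :]) @ g == np.real(y),
               np.imag(F[omega, :]) @ g == np.imag(y)]
cp.Problem(cp.Minimize(cp.norm1(g)), constraints).solve()

print("max absolute reconstruction error:", np.max(np.abs(g.value - f)))
```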
Article
In structural health monitoring (SHM) of civil structures, data compression is often needed to reduce the cost of data transfer and storage, because of the large volumes of sensor data generated from the monitoring system. The traditional framework for data compression is to first sample the full signal and, then to compress it. Recently, a new data compression method named compressive sampling (CS) that can acquire the data directly in compressed form by using special sensors has been presented. In this article, the potential of CS for data compression of vibration data is investigated using simulation of the CS sensor algorithm. For reconstruction of the signal, both wavelet and Fourier orthogonal bases are examined. The acceleration data collected from the SHM system of Shandong Binzhou Yellow River Highway Bridge is used to analyze the data compression ability of CS. For comparison, both the wavelet-based and Huffman coding methods are employed to compress the data. The results show that the values of compression ratios achieved using CS are not high, because the vibration data used in SHM of civil structures are not naturally sparse in the chosen bases.
Article
An Abstract of a Thesis Submitted to the Graduate Faculty of North Carolina State University in Partial Fulfillment of the Requirements for the Degree of MASTER OF APPLIED MATHEMATICS. The original of the complete thesis is on file in the Department of Mathematics. Examining Committee:
Article
Real-world structures are subjected to operational and environmental condition changes that make it difficult to detect and identify structural damage. The aim of this report is to detect damage in the presence of such operational and environmental condition changes through the application of the Los Alamos National Laboratory’s statistical pattern recognition paradigm for structural health monitoring (SHM). The test structure is a laboratory three-story building, and the damage is simulated through nonlinear effects introduced by a bumper mechanism that produces a repetitive impact-type nonlinearity. The report reviews and illustrates various statistical principles that have had wide application in many engineering fields. The intent is to provide the reader with an introduction to feature extraction and statistical modelling for feature classification in the context of SHM. In this process, the strengths and limitations of some of the statistical techniques used to detect damage in structures are discussed. In the hierarchical structure of damage detection, this report is only concerned with the first step of the damage detection strategy, which is the evaluation of the existence of damage in the structure. The data from this study and a detailed description of the test structure are available for download at: http://institute.lanl.gov/ei/software-and-data/.