Article

A Data Loss Recovery Technique using Compressive Sensing for Structural Health Monitoring Applications

Abstract

Recent developments in Wireless Sensor Networks (WSNs) have benefited various fields; Structural Health Monitoring (SHM) is an important application among them. WSNs offer multiple advantages, such as continuous monitoring of the structure, lower installation costs, and fewer human inspections. However, because of the wireless medium, hardware faults, and similar factors, data loss is an unavoidable consequence of WSNs. Recently, a new class of data loss recovery techniques using Compressive Sensing (CS) has been gaining attention from the research community. In these methods, the transmitter sends encoded acceleration data and the receiver uses a CS recovery method to recover the original signal. Usually, the encoding process uses a random measurement matrix, which makes the process computationally complex to implement on sensor nodes. This paper presents a technique in which the signal is encoded using a Scrambled Identity Matrix. This method reduces the computational complexity and is also robust to data loss. A performance analysis of the proposed technique is presented for random and continuous data loss. A comparison with existing data loss recovery techniques is also shown using simulated data loss (both random and continuous). It is observed that the proposed technique using the Scrambled Identity Matrix can reconstruct the signals even after significant data loss.
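The core idea of the abstract can be sketched in a few lines of Python. A scrambled identity matrix is a row-permuted identity, so encoding reduces to an O(N) sample shuffle instead of the O(N²) dense multiply a random measurement matrix requires, and lost samples turn recovery into a standard CS problem. The DCT sparsifying basis and the OMP solver below are illustrative assumptions, not necessarily the exact choices made in the paper.

```python
import numpy as np
from scipy.fftpack import idct

rng = np.random.default_rng(0)
N = 256

# Test signal that is exactly 3-sparse in the DCT domain (vibration-like).
s_true = np.zeros(N)
s_true[[8, 21, 50]] = [1.0, 0.6, 0.4]
x = idct(s_true, norm="ortho")

# Encoder: a scrambled identity matrix is a row-permuted identity, so
# applying it is an O(N) sample shuffle, cheap enough for a sensor node.
perm = rng.permutation(N)
y = x[perm]                                   # encoded signal sent over the WSN

# Channel: 40% of the transmitted samples are lost at random.
kept = np.sort(rng.choice(N, size=int(0.6 * N), replace=False))
y_rx = y[kept]                                # samples that actually arrive

# Decoder: each surviving sample maps to a known original position, so
# recovery is CS reconstruction from a row subset of the sparsifying
# basis Psi (here the orthonormal inverse-DCT matrix, x = Psi @ s).
Psi = idct(np.eye(N), norm="ortho", axis=0)
A = Psi[perm[kept], :]                        # effective sensing matrix

def omp(A, b, k):
    """Orthogonal Matching Pursuit for a k-sparse coefficient vector."""
    residual, support = b.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    s = np.zeros(A.shape[1])
    s[support] = coef
    return s

x_hat = Psi @ omp(A, y_rx, k=3)               # reconstructed acceleration signal
```

Because the decoder knows which original positions the surviving samples came from, random and continuous loss patterns both reduce to deleting rows of the sensing matrix, which is why such a scheme tolerates significant data loss.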

... The model-free approach, which does not rely on any assigned finite element model, offers an alternative way to reconstruct data in realistic SHM applications on complex structures [44][45][46][47][48][49][50][51][52]. The majority of model-free data reconstruction approaches for SHM applications were developed under the compressive sensing (CS) framework [45,46]. ...
... By employing CS sampling theory, the sparse nature of the captured measurement was exploited to reconstruct the missing data from non-uniform, low-rate, randomly sampled measurements. Bao et al. [47], Zou et al. [48], and Thadikemalla and Gandhi [49] exploited the CS framework to recover incoherent missing data of wireless sensors in SHM applications on operating civil infrastructure. Huang et al. [50] proposed a robust Bayesian CS approach, which was extended for data reconstruction by adopting a sparse Bayesian learning mechanism [51]. ...
Article
In this study, a novel sequential broad learning (SBL) approach is proposed to reconstruct the missing signals of damaged sensors in structural health monitoring (SHM) sensory networks. It is capable of reconstructing the structural response and external excitation of linear/nonlinear time-varying dynamical systems under stationary/nonstationary excitation. The proposed approach is a model-free, data-driven machine learning methodology, and the data reconstruction is executed sequentially with moving time windows. The learning algorithm is developed by adopting the recently developed broad learning system (BLS) (Chen and Liu, 2018). In contrast to deep learning, which suffers from excessive computational cost for training stacks of hierarchical layers, BLS is established with a broadly expandable network and can be modified incrementally based on the inherited results from the previously trained architecture. Therefore, BLS provides a computationally very efficient alternative to deep learning. Taking advantage of BLS, the proposed SBL approach can efficiently handle the massive data stream generated in long-term monitoring. To demonstrate the efficacy and applicability of the proposed approach, simulated examples covering linear and nonlinear time-varying dynamical systems subjected to stationary/nonstationary wind-load/base excitation with different types of sensing devices are discussed. Moreover, the SHM database of field measurements from the MIT Green Building is utilized to examine the performance of the proposed approach in a realistic application. It is demonstrated that the proposed approach offers a powerful data reconstruction tool for the challenging data-missing situations encountered in SHM.
... Traditional methods such as statistical inference and sparse representation have been used [5,6]. Although simple and computationally efficient, these methods typically either assume linearity or assume normally distributed data, assumptions that do not fit the nonlinear, complex, and time-dependent nature of vibration signals in SHM systems [7]. ...
... The sensor system is typically carefully protected but still exposed to external environmental factors. Over time, due to various influences, the sensor system may encounter issues such as reduced sensitivity, signal loss, or even damage [13]. This poses potential risks of data loss. ...
Article
Full-text available
Structural health monitoring (SHM) plays a crucial role in ensuring the safety of infrastructure in general, especially critical infrastructure such as bridges. SHM systems allow the real-time monitoring of structural conditions and early detection of abnormalities, enabling managers to make accurate decisions during the operation of the infrastructure. However, for various reasons, data from SHM systems may be interrupted or faulty, leading to serious consequences. This study proposes using a Convolutional Neural Network (CNN) combined with Gated Recurrent Units (GRUs) to recover lost data from accelerometer sensors in SHM systems. CNNs are adept at capturing spatial patterns in data, making them highly effective for recognizing localized features in sensor signals, while GRUs are designed to model sequential dependencies over time, making the combined architecture particularly suited to time-series data. A dataset collected from a real bridge structure is used to validate the proposed method. Different cases of data loss are considered to demonstrate the feasibility and potential of the CNN-GRU approach. The results show that the CNN-GRU hybrid network effectively recovers data in both single-channel and multi-channel data loss scenarios.
... MRI acquisitions, however, are prone to image distortion from motion artifacts, susceptibility artifacts, and other sources, which can impair image quality and diagnostic precision. Robust signal processing methods, such as compressed sensing and parallel imaging, are used to reconstruct high-quality MRI images from distorted or undersampled data (Thadikemalla & Gandhi, 2018). ...
Article
Full-text available
In biomedical applications, reliable signal processing methods are essential for improving diagnosis and treatment plans. To enhance patient outcomes, this study investigates machine learning, independent component analysis (ICA), the wavelet transform, adaptive filtering, and other techniques for analyzing physiological data and medical imaging. The methodology entails a thorough analysis of the existing body of literature, with an emphasis on the concepts, practices, and uses of robust signal processing methods in biomedical contexts. Key findings demonstrate how robust signal processing techniques can reduce artifacts and noise, extract valuable features, enable real-time feedback and monitoring, and aid in customized treatment planning. To fully realize the potential of robust signal processing techniques in healthcare, the policy implications emphasize the importance of funding research and development, creating standards and guidelines, encouraging education and training, and cultivating cooperation and data sharing. Robust signal processing methods have the potential to transform the medical field by giving doctors insight into physiological signals and imaging data, ultimately enhancing patient outcomes, diagnosis, and therapy.
... The CS-based method has been improved to recover lost data in combination with machine learning techniques [25]. In CS, the signal can be encoded using a scrambled identity matrix to reduce the computational complexity [26]. A modified two-task learning algorithm has been used to enhance the accuracy of CS signal reconstruction [27]. ...
Article
Full-text available
This paper presents a novel inverse conjugate beam method (ICBM) for full-field static and dynamic strain measurement using long-gauge fiber Bragg grating (FBG) and vision sensors. By applying reverse analysis of conjugate beam theory, the ICBM establishes a displacement-strain transformation model that effectively explains the correlation between displacement, distributed strain, and the desired strain. The static and dynamic strain can be reconstructed directly by combining the monitored displacement and strain responses at the displacement monitoring locations. The vision sensor is employed to complement the installed long-gauge strain sensors by monitoring the displacement of locations without a long-gauge sensor. This method helps overcome the difficulty of monitoring full-bridge strain caused by insufficient sensors or inaccessible monitoring positions. In addition, this method requires the displacement from the vision sensor to determine the residual stiffness of each unit as prior information for the ICBM. Both numerical studies and laboratory tests are carried out on a simply supported beam for conceptual verification. The results demonstrate that the proposed ICBM successfully achieves static and dynamic strain response reconstruction at displacement monitoring locations.
... One such approach involves CS techniques, which efficiently restore missing data by projecting the raw signal onto a random matrix. This mitigates information loss between wireless sensor nodes and the base station [19][20][21]. An extension of this, BCS, couples sparse Bayesian learning to reconstruct signals from a limited set of compressed measurements [22,23]. ...
Article
Full-text available
In efforts to resolve the limitations of incomplete bridge measurements in accurately identifying structural parameters, this study introduces a novel approach for precise measurement of full-field bridge strain and displacement using a limited number of long-gauge fiber Bragg grating (FBG) strain sensors. This method consists of two algorithms: a double reconstruction algorithm for strain reconstruction and an improved conjugate beam algorithm for displacement identification. The double reconstruction algorithm exploits proper orthogonal decomposition (POD) to establish a comprehensive mapping relationship between units, thereby achieving precise strain estimation. This mapping process enables the algorithm to accurately compute strain responses. By leveraging the derived full-field strain data, the improved conjugate beam algorithm effectively captures complete displacement responses. The conventional sensor placement configuration monitors only a limited number of units. To enhance full-field measurement accuracy, this method categorizes unmonitored units into two levels based on sensor placement. The double reconstruction algorithm then estimates the strain response sequentially, contributing to enhanced precision. Numerical simulations validate the proposed method, which is demonstrated to be efficient and robust under various vehicle loads, impact loads, and noise levels. A physical experiment further demonstrates the efficacy of the proposed method in practice. The results underscore the potency of the proposed method as a powerful theoretical and practical approach for full-field strain and displacement measurement.
... Numerous academics [5,14,15] have proposed various techniques for low-quality data recovery and achieved successful recovery outcomes. Compressive sensing [16], a novel approach to signal processing, has recently attracted considerable attention from researchers in the field [17][18][19]. The fundamental step in compressive sensing is signal reconstruction, and reconstruction algorithms fall into three major categories: greedy algorithms, convex optimization algorithms, and combined optimization algorithms. ...
Article
Full-text available
The vibration signals collected from rotating machinery contain pulses, missing points, and other low-quality anomalous data due to environmental noise interference, unstable data transmission, and data acquisition instrument failure. These low-quality data obstruct analysis of the health condition of rotating machinery. This paper proposes a method for anomalous vibration signal detection and recovery based on the Local Outlier Factor algorithm and the Modified Sparsity Adaptive Matching Pursuit algorithm (LOF-MSAMP). The method combines the Local Outlier Factor algorithm and Compressive Sensing theory to detect and recover anomalous vibration signals. This paper evaluates the recovery performance both qualitatively and quantitatively and discusses how the proposed method's hyperparameter selection affects the recovery results. A simulated signal and a measured hob base signal are used to verify the proposed method. The results indicate that, compared to seven other reconstruction algorithms, the signal recovered by the proposed method has a lower error level and a higher waveform similarity to the original signal, reaching more than 98%, effectively improving data quality.
... The measurement matrix (MM), which is the core component of CS, plays an important role in signal acquisition and reconstruction [25,26]. Currently, measurement matrices that satisfy the restricted isometry property (RIP) [27] are mainly divided into two categories: deterministic and random matrices. ...
Article
Full-text available
Missing data caused by many factors, such as equipment short circuits or data cleaning, affect the accuracy of rotating machinery condition monitoring. To improve the precision of missing data recovery, a compressed sensing (CS)-based vibration data repair method is developed. Firstly, based on the Gaussian random matrix, an improved optimized measurement matrix (OMM) is proposed to sample the data accurately. Then a sparse representation of the noisy vibration signal is obtained through the discrete cosine transform (DCT). Finally, the orthogonal matching pursuit (OMP) algorithm is employed to reconstruct the missing signal. The effectiveness of the proposed method is verified by analyzing constant- and variable-speed time series of rolling bearings. Compared with other data repair methods, OMM achieves higher repair precision at different loss rates.
... Data-driven methods, especially machine learning algorithms, have attracted significant attention for structural response reconstruction. Previous studies have shown that traditional machine learning-based methods (Thadikemalla and Gandhi 2018) can be used for response reconstruction, but their accuracy is unsatisfactory for practical structures that are large and complex. Recently, with the rapid development of computational power and machine learning algorithms, especially deep learning, research on data-driven structural response reconstruction has been highly emphasized. ...
Article
Full-text available
Inevitable response loss under complex operational conditions significantly affects the integrity and quality of measured data, rendering structural health monitoring (SHM) ineffective. To remedy the impact of data loss, a common way is to transfer the recorded response of an available measurement point to where the data loss occurred by establishing a response mapping from the measured data. However, current research has not yet addressed subsequent structural condition changes or response mapping learning from a small sample. Therefore, this paper proposes a novel data-driven structural response reconstruction method based on a carefully designed generative adversarial network (UAGAN). Advanced deep learning techniques, including U-shaped dense blocks, self-attention, and a customized loss function, are specialized and embedded in UAGAN to improve the extraction of universal and representative features and the establishment of generalized response mappings. In numerical validation, UAGAN efficiently and accurately captures the distinguishing features of the structural response from only 40 training samples of the intact structure. Besides, the established response mapping is universal: it effectively reconstructs responses of the structure suffering up to 10% random stiffness reduction or structural damage. In the experimental validation, UAGAN is trained with ambient response and applied to reconstruct responses measured under earthquake excitation. The reconstruction losses of the response in the time and frequency domains reached 16% and 17%, better than previous research, demonstrating the leading performance of the carefully designed network. In addition, the modal parameters identified from the reconstructed responses and the corresponding true responses are highly consistent, indicating that the proposed UAGAN has great potential for application in practical civil engineering.
... On the other hand, data-driven structural response reconstruction, which directly extracts the transmissibility function from the measured response, is receiving increasing research attention. Among traditional data-driven methods, the classic compressive sampling-based response reconstruction methods [17,18] are effective in recovering short missing segments. By combining compressive sampling techniques with sparse Bayesian learning, Huang et al. [19] proposed the Bayesian compressive sensing (BCS) method, which reduces the number of required sensors through higher compression ratios. ...
Article
In structural health monitoring (SHM) of civil engineering structures, loss of measured structural responses inevitably occurs in practice, especially when structures encounter extreme loads. The loss of measurements severely undermines the completeness of the collected structural information and the reliability of structural condition assessment. Therefore, timely and accurate recovery of lost data is of paramount importance. This paper proposes a novel approach based on a self-attention mechanism enhanced generative adversarial network (SAGAN) for learning the intrinsic correlations between responses and reconstructing the lost data based on the accurately measured ones. SAGAN innovatively embeds the self-attention mechanism in the computational flow to facilitate the extraction of spatial and even temporal correlations among structural responses. In the experimental validations, the reconstructed responses of Guangzhou New Television Tower (GNTT) show great agreement with the true responses in the form of both time sequences and Fourier spectra. SAGAN is also versatile, demonstrating its effectiveness and robustness by reconstructing the responses competently under both ambient and typhoon excitations. In addition, by visualizing and analyzing the internal matrices and feature maps of SAGAN, it is found that the self-attention module benefits the learning of data features and improves the establishment of mappings between responses. The suitability of the proposed approach for SHM-related tasks is validated by extremely consistent modal parameters. The identified natural frequencies from the reconstructed and the corresponding true responses are identical, and the Coordinate Modal Assurance Criterion (COMAC) value reaches 99.98%.
... Many researchers have also studied missing data recovery using CS. It was found that temporal signals can be reconstructed by capturing little information, based on sampling theory and sparsity assumptions [22,23]. Another kind of method was developed based on Compressive Sensing that conducts response reconstruction under a non-uniform data loss mode [24]. ...
Article
Full-text available
In structural health monitoring (SHM) systems, sensors are important for collecting structural responses to assess the load-resisting capacity and health status of structures. However, data loss in these sensors is sometimes inevitable due to communication outages and malfunctions, which results in incorrect diagnosis of structural health status. Conventional lost-data imputation methods fundamentally have low implementation efficiency owing to non-compact neural network structures. This paper proposes a slim generative adversarial imputation network (SGAIN) to recover missing deflection data for bridge SHM systems. This framework uses slim neural networks with a generator-discriminator architecture to capture valuable information from the non-missing parts of the faulty sensor and from other normal sensors. Based on analysis of long-term measured deflection data under the ambient and dynamic excitation of a highway-railway dual-purpose bridge, the proposed method shows efficient and accurate imputation performance under input scenarios with different correlation levels. The comparative results show that the proposed SGAIN significantly outperforms the conventional GAIN model for all three scenarios with different missing rates. The execution speed of the SGAIN model also exceeds that of GAIN by 10% to 20% for the three types of scenarios. Such improvement is significant, and it is believed that SGAIN could also have good imputation capability when transferred to other scenarios.
... Ni and Li (2016) used neural network technology to reconstruct the missing wind pressure monitoring data of a high-rise building. In addition, the compressed sensing algorithm and the improved algorithms based on it were widely used in the recovery of missing data in SHM systems (Thadikemalla and Gandhi 2018, Huang et al. 2016, Bao et al. 2018. ...
Article
Full-text available
Benefiting from the massive monitoring data collected by structural health monitoring (SHM) systems, scholars can grasp the complex environmental effects and structural state during structure operation. However, monitoring data are often missing due to sensor faults and other reasons, so it is necessary to study recovery methods for missing monitoring data. Taking the structural temperature monitoring data of the Nanjing Dashengguan Yangtze River Bridge as an example, a long short-term memory (LSTM) network-based recovery method for missing structural temperature data is proposed in this paper. Firstly, the prediction results of temperature data using the LSTM network, support vector machine (SVM), and wavelet neural network (WNN) are compared to verify the accuracy advantage of the LSTM network in predicting time series data (such as structural temperature). Secondly, the application of the LSTM network to the recovery of missing structural temperature data is discussed in detail. The results show that the LSTM network can effectively recover the missing structural temperature data; incorporating more intact sensor data as input further improves the recovery effect; and selecting as input the sensor data with a higher correlation coefficient to the data to be recovered achieves higher accuracy.
... Vibration signals with low damping factors are sparse in the Fourier domain [24]. For these reasons, the Discrete Cosine Transform (DCT) or the Discrete Fourier Transform (DFT) matrices have been conventionally chosen as sparsifying bases [10], [25]. Alternatively, Wavelet bases have been proposed [26] to better track non-stationary phenomena. ...
Article
The main challenge in the implementation of long-lasting vibration monitoring systems is to tackle the complexity of modern 'mesoscale' structures. Thus, the design of energy-aware solutions is promoted for the joint optimization of data sampling rates, on-board storage requirements, and communication data payloads. In this context, the present work explores the feasibility of the model-assisted rakeness-based compressed sensing (MRak-CS) approach to tune the sensing mechanism on the second-order statistics of measured data by pivoting on numerical priors. Moreover, a signal-adapted sparsity basis relying on the Wavelet Packet Transform is conceived, which aims at maximizing the signal sparsity while allowing for precise time-frequency localization. The adopted solutions were tested with experiments performed on a sensorized pinned-pinned steel beam. Results prove that the proposed compression strategies are superior to conventional eigenvalue approaches and to standard CS methods. The achieved compression ratio is equal to 7, and the quality of the reconstructed structural parameters is preserved even in the presence of defective configurations.
... Recently, a new matrix, named the Scrambled Identity Matrix, has been proposed in the context of CS-based data loss recovery in SHM [15]. The authors used a row-wise permutation of the identity matrix to form the projection matrix. ...
Article
Compressive Sensing (CS) is a novel signal sampling technique that can enhance the resiliency of the data transmission process in Structural Health Monitoring (SHM) systems by projecting the raw signal into another domain and transmitting the projected signal instead of the original one. The original signal can later be recovered in the fusion center using the received projected vector and the corresponding projection matrix. In this study, data loss recovery of multiple signals with variable loss ratio is investigated. Unlike previously proposed CS-based single-signal approaches, the inter-correlation across vibration signals is exploited in this study. The inter-correlation type for vibration signals is characterized as having common sparse supports (the same location of non-zero elements in the sparse domain) and is dealt with in the context of Distributed Compressive Sensing (DCS). In this approach, vibration signals are encoded separately without extra inter-sensor communication overhead and are decoded jointly. Adopting the DCS-based approach enabled handling non-uniform data loss pattern across channels by allowing the use of a different projection matrix for each channel. Furthermore, a new projection matrix, named Permuted Random Demodulator (PRD), is proposed that not only reduces the coherence of the sensing matrix and enhances the reconstruction accuracy, but also makes the proposed approach robust to continuous data loss. The performance of the proposed method is evaluated using acceleration responses of a real-life bridge structure under traffic excitation. Modal parameters of the bridge are also identified using the recovered signals with reasonable accuracy.
... For k-sparse vector recovery, on the order of N-choose-k trials are required to find the sparsest vector, increasing computational complexity. Among the various sparse recovery techniques, such as basis pursuit and non-convex optimization, iterative greedy algorithms like OMP are found to be faster and have been proposed for accelerometer-based SHM in WSNs [5]-[7]. In this algorithm, the original N-dimensional sparse signal X (N×1) is encoded using a sensing matrix Φ (N×N), resulting in a measurement vector Y (N×1) of the same length as the original signal: ...
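The encode-and-recover loop described in that passage (Y = ΦX with a dense N×N sensing matrix, followed by greedy recovery) can be sketched as follows; the Gaussian Φ, the sparsity level, and the loss ratio are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, k = 128, 4

# k-sparse signal in the canonical basis, as in the snippet's setup.
x = np.zeros(N)
x[rng.choice(N, size=k, replace=False)] = [1.5, -1.0, 0.8, -0.6]

# Dense random sensing matrix Phi (N x N): encoding costs O(N^2), which is
# precisely the on-node overhead a scrambled identity matrix avoids.
Phi = rng.standard_normal((N, N)) / np.sqrt(N)
y = Phi @ x                        # measurement vector Y = Phi X

# Half of the measurement samples are lost in transmission at random.
kept = rng.choice(N, size=N // 2, replace=False)
A, y_rx = Phi[kept, :], y[kept]

def omp(A, b, k):
    """Orthogonal Matching Pursuit for a k-sparse vector."""
    residual, support = b.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    s = np.zeros(A.shape[1])
    s[support] = coef
    return s

x_hat = omp(A, y_rx, k)            # recovery from the surviving half
```

Each OMP iteration adds the column most correlated with the residual, so the cost grows with the sparsity k rather than with an exhaustive N-choose-k search.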
Article
During the structural health monitoring of bridges, it has been observed that the vibration data collected can sometimes be randomly lost or sampled non-uniformly. This leads to a low signal-to-noise ratio in the spectral functions of the measured data, making it difficult to identify weak modes. To address this issue, a framework for operational modal identification is proposed in this study. It utilizes the fast Bayesian fast Fourier transform (FFT) method to estimate the modal parameters of highway bridges considering the non-uniform monitoring data. The initial frequency parameters for the fast Bayesian FFT approach are automatically determined using the proposed autoregressive (AR) power spectral density (PSD)-guided peak picking method. This overcomes the challenge of capturing initial frequencies related to weakly contributed modes. Additionally, the bandwidth parameter for each mode is determined using the modal assurance criterion (MAC) of the first left singular vectors of PSD matrices. Furthermore, when analyzing non-uniform vibration data, it is recommended to use the non-uniform FFT (NUFFT) for calculating PSD functions in order to improve identification accuracy. The proposed method is validated using acceleration data from both a numerical model and a real-world bridge. The results demonstrate that the identification uncertainty of modal parameters increases with higher non-uniform levels.
Article
Full-text available
The article discusses the development of mathematical support for recovering the values of discrete-time sequence samples obtained by uniform sampling of a continuous signal. The recovery problem is solved for a signal that can be considered stationary, or stationary at least in the broad sense (quasi-stationary). The mathematical support was developed by constructing a moving average model and estimating the correlation of signal samples over time, with forward and reverse forecasting. The estimates of the signal correlation function needed to recover sections with lost values are calculated from samples with known values. These correlation function estimates can be calculated regardless of the location of the recovery area when the stationarity condition of the signal is met, and they can be used for both forward and reverse forecasting. Moreover, even if several problem sections must be recovered, it is enough to compute the required sample of correlation function estimates only once. The resulting mathematical solution became the basis for the development of algorithmic support. Testing and functional checks of the algorithmic support were carried out through simulation using a signal model representing an additive sum of harmonic components with random initial phases. The simulation results showed that the estimates of the lost sample values are computed with a fairly low error, in forward and reverse forecasting as well as when they are used together. In practice, the choice between recovery based on forward or reverse forecasting is determined by the actual processing conditions: if there are not enough previous samples with known values to carry out forward forecasting, then the reverse forecasting procedure is applied, and vice versa. The developed algorithmic support can be implemented as metrologically significant software for digital signal processing systems.
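A minimal sketch of the forward-forecasting idea, assuming a Yule-Walker-type linear predictor built from sample correlation estimates (the article's exact moving-average formulation may differ):

```python
import numpy as np

rng = np.random.default_rng(2)
N, p = 2000, 12                  # record length, predictor order (assumed)

# Quasi-stationary test signal: harmonics with random initial phases,
# as in the article's simulation model, plus a little measurement noise.
t = np.arange(N)
x = (np.sin(0.07 * t + rng.uniform(0, 2 * np.pi))
     + 0.5 * np.sin(0.19 * t + rng.uniform(0, 2 * np.pi))
     + 0.05 * rng.standard_normal(N))

gap = slice(1200, 1230)          # 30 consecutive lost samples to recover

# 1) Estimate the correlation function from samples with known values
#    (here: the section preceding the gap).
known = x[:gap.start]
r = np.array([known[:len(known) - m] @ known[m:]
              for m in range(p + 1)]) / len(known)

# 2) Build a forward-forecasting linear predictor from the normal
#    (Yule-Walker-type) equations on those correlation estimates.
R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
a = np.linalg.solve(R, r[1:p + 1])

# 3) Recover the gap sample by sample, each forecast using the p
#    preceding (known or already-recovered) values.
x_hat = x.copy()
for n in range(gap.start, gap.stop):
    x_hat[n] = a @ x_hat[n - 1:n - p - 1:-1]

# Normalized RMS error of the recovered section.
err = np.sqrt(np.mean((x_hat[gap] - x[gap]) ** 2)) / np.std(x)
```

Reverse forecasting is the mirror image: the same correlation estimates are reused and the predictor is run backward from the samples that follow the gap.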
Article
Wheelset bearing fault samples of high-speed trains often suffer from insufficient numbers and missing points. Working conditions are among the most important factors affecting data augmentation and repair, yet they are not sufficiently emphasized by existing methods. Moreover, GANs, as effective sample generation methods, still have problems such as mode collapse and training difficulty. To solve these problems, a data augmentation and data repair method for high-speed train wheelset bearing fault diagnosis with multi-speed generation capability is proposed. By adding an independent speed classifier, the speed conditions of samples are classified, and the relation among samples is introduced to improve the accuracy of the classifier. A new loss function is designed to guide the generation of samples with specified faults and speeds. Finally, a sample repair method based on the pre-trained generator is studied to improve the utilization rate of real samples. A wheelset bearing dataset is utilized to validate that the proposed method can generate and repair samples for different speeds and fault types.
Article
Effective restoration of lossy data is a prerequisite for reliable early-warning information in damage diagnosis, given the large amounts of sensor data collected, transmitted, and stored in long-term structural health monitoring (SHM). In this study, a compressive sampling matching pursuit (CoSaMP) algorithm was developed to solve the restoration problem for electromechanical admittance (EMA, the inverse of impedance) signatures used in damage identification. Observed vectors obtained through a random matrix projection were collected in place of the EMA signatures used in the conventional method, and restoring EMA signatures from incomplete observed vectors was cast as a constrained optimization problem, which the CoSaMP algorithm then solves. The approach was validated through a series of artificial crack identification tests on a standardized concrete cube under laboratory conditions and compared with an iterative shrinkage-thresholding algorithm (ISTA)-based method. The impact of different loss ratios on restoration accuracy was also discussed. The proposed approach shows promising applicability for damage identification with the EMA technique in the presence of lossy data.
Conference Paper
Full-text available
Sparse sampling, also known as compressed sampling or compressed sensing (CS), is a signal processing technique that samples a signal with considerably fewer samples than Nyquist's sampling theorem requires. Traditional sampling methods are not always feasible, motivating strategies that overcome their limitations. Compressive sensing reduces complexity, speeds up processing, and improves storage efficiency, and its reconstruction procedures aim to restore the original signal accurately. According to a recent study, CS outperforms other compression algorithms in terms of efficiency. This study presents an overview of existing compressive sensing approaches so that researchers can better understand current limitations and work to improve accuracy and precision.
Article
Compressive sensing (CS) has been widely explored for data compression and signal recovery in the presence of lossy transmission in structural health monitoring (SHM) applications. Discussions of lost data recovery using CS reported in the literature are typically limited to acceleration signals obtained from vibration-based SHM systems. Moreover, these reports limit the study to performance analysis of signal recovery in the time domain, while the feasibility of these algorithms for subsequent damage analysis using recovered signals remains unexplored. A systematic evaluation of CS-based signal recovery for algorithmic estimation of the damage index (DI) in ultrasound SHM systems is important for determining its practicality in automated SHM applications. In this paper, we study the feasibility of DI estimation in ultrasonic guided-wave testing of honeycomb composite sandwich structures (HCSS) using signals recovered from lossy sensor recordings. We emulate signal loss by masking the sensor recordings in an experimentally measured dataset comprising an HCSS panel with two defects (disbond and high-density (HD) core) instrumented with eight piezoelectric wafers, and employ an orthogonal matching pursuit (OMP) based signal recovery algorithm. Our analysis suggests that while the OMP-based algorithm is a robust and reliable signal recovery technique, producing signal reconstruction errors below 8.4% for data loss as high as 50%, the magnitude error in DI estimation is significant and varies across signal difference coefficient (SDC) algorithms. We propose an alternate SDC definition, SDCPA, computed using the peak amplitude of the Hilbert transform (HT), which shows consistently lower error than the conventional cumulative-sum-based SDC definition for the HCSS case study. Further, we study trends of error in the recovery of lossy time-domain signals, as well as in DI computation, as a function of data loss parameters, for both random and continuous data loss. Our findings indicate that conventional DI computation algorithms for ultrasonic SHM need to be revisited when used in the compressive sensing paradigm.
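As an illustration of a peak-amplitude damage index of the kind described above, the following sketch compares the Hilbert-envelope peaks of a baseline and a current signal. The paper's exact SDCPA formula is not reproduced here, so this relative-change definition is an assumption:

```python
import numpy as np
from scipy.signal import hilbert

def sdc_pa(baseline, current):
    # Relative change in the peak amplitude of the Hilbert-transform
    # envelope; the paper's exact SDCPA definition may differ.
    p0 = np.max(np.abs(hilbert(baseline)))
    p1 = np.max(np.abs(hilbert(current)))
    return abs(p0 - p1) / p0
```

Because the envelope peak is a single scalar, this index is insensitive to small misalignments that can inflate cumulative-sum-based differences.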
Chapter
The huge amount of data to be processed is still a challenge for current Structural Health Monitoring (SHM) sensor networks and a primary reason for the development of hardware-oriented signal processing techniques. In the specific case of vibration-based inspections, the sparse spectral distribution of structures in the dynamic regime has made the Compressed Sensing (CS) paradigm a compelling solution. In this work, the on-sensor deployment of data recovery procedures is tackled from an edge-computing perspective, aiming to select the best-performing strategy for jointly optimizing memory storage, latency, and consistency of the retrieved structural information. An experimental campaign conducted on a steel beam undergoing ground motion excitation revealed that the Orthogonal Matching Pursuit (OMP) strategy is a promising candidate for sensor deployment, since it attains the highest reconstruction quality while minimizing the associated memory and time cost.
Article
Full-text available
Structural health monitoring (SHM) technology for the surveillance and evaluation of existing and newly built long-span bridges has been widely developed, and the significance of the technique has been recognized by many administrative authorities. This paper reviews the recent progress of SHM technology as applied to long-span bridges. The deployment of an SHM system is introduced. Subsequently, data analysis and condition assessment, including techniques for modal identification, methods for signal processing, and damage identification, are reviewed and summarized. A case study of the SHM system of a long-span arch bridge (the Jiubao Bridge in China) is systematically incorporated in each part to advance the understanding of the deployment and investigation of an SHM system for long-span arch bridges. Applications of SHM systems to long-span arch bridges are also introduced. From these illustrations, the challenges and future trends in developing an SHM system are summarized.
Article
Full-text available
In wireless data transmission processes, data loss is an important factor that reduces the robustness of wireless sensor networks (WSNs). In many practical engineering applications, data loss compensation algorithms are therefore required to maintain robustness, and such algorithms are typically based on compressive sensing, requiring microcontrollers with large memory. This paper presents an improved algorithm, based on the random demodulator, to overcome the microcontroller dependence of traditional data loss algorithms. The newly developed algorithm has the following advantages: 1) low space complexity; 2) few floating-point calculations; and 3) low time complexity. It is therefore more suitable for embedding into ordinary nodes than the traditional algorithms. A WiFi-based WSN is also developed to verify the effectiveness and feasibility of the proposed algorithm. A field experiment was conducted on the Xinghai Bay Bridge. Experimental results show that the WSN works properly and steadily, and that data loss can be compensated effectively and efficiently through the use of the proposed algorithm.
Article
Full-text available
Reliable data transmission over lossy communication links is expensive due to the overheads of error protection. For signals that have inherent sparse structure, compressive sensing (CS) can facilitate efficient sparse-signal transmission over lossy links without data compression or error protection. The natural packet loss in the lossy link is modeled as a random sampling process of the transmitted data, and the original signal is reconstructed from the lossy transmission results at the receiving end using a CS-based reconstruction method. The impact of packet length on transmission efficiency under different channel conditions is discussed, and interleaving is incorporated to mitigate the impact of burst data loss. Extensive simulations and experiments have been conducted and compared with the traditional automatic repeat request (ARQ) interpolation technique, with very favorable results in terms of both the accuracy of the reconstructed signals and the transmission energy consumption. Furthermore, the packet-length effect provides useful insight for using compressed sensing for efficient sparse-signal transmission over lossy links.
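The interleaving step mentioned above can be illustrated with a simple block interleaver; this is a generic sketch, not the paper's implementation:

```python
import numpy as np

def interleave(x, depth):
    # Write row-wise, read column-wise: a burst of consecutive losses in
    # the channel becomes isolated losses after de-interleaving.
    pad = (-len(x)) % depth
    m = np.concatenate([x, np.zeros(pad)]).reshape(depth, -1)
    return m.T.reshape(-1), pad

def deinterleave(y, depth, pad):
    x = y.reshape(-1, depth).T.reshape(-1)
    return x[:len(x) - pad] if pad else x
```

After de-interleaving, a burst of lost packets is spread into isolated missing samples, which matches the random-sampling loss model that CS reconstruction handles well.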
Article
Full-text available
In a practical structural health monitoring (SHM) process based on a wireless sensor network (WSN), data loss often occurs during transmission between sensor nodes and the base station, which affects the structural data analysis and subsequent decision making. In this paper, a method of recovering lost data in a WSN based on compressive sensing (CS) is proposed. Compared with existing methods, it is a simple and stable data recovery method and achieves lower recovery error for one-dimensional SHM data loss. First, the response signal x is projected onto the measurement vector y through inner products with random vectors. Note that y is a linear projection of x and is permitted to be partially lost during transmission. Next, when the base station receives the incomplete data, the response signal x is reconstructed from the data vector y using the CS method. Finally, a test of active structural damage identification on an LF-21M aviation anti-rust aluminum plate is conducted. The response signal gathered from the aluminum plate is used to verify the data recovery ability of the proposed method.
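The projection-and-reconstruction pipeline described above can be sketched as follows, assuming the signal is sparse in a DCT basis and using orthogonal matching pursuit as the CS recovery step (the paper itself does not prescribe OMP; this is an illustrative choice):

```python
import numpy as np
from scipy.fft import idct

def omp(A, y, k):
    # Orthogonal Matching Pursuit: greedily select k columns of A and
    # re-fit the coefficients by least squares at each step.
    r, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        r = y - A[:, idx] @ coef
    s = np.zeros(A.shape[1])
    s[idx] = coef
    return s
```

At the receiver, losing entries of y simply removes rows of the measurement matrix; the same solver is run on the surviving rows, which is why the scheme is robust to partial loss of y.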
Article
Full-text available
In a wireless sensor network, data loss often occurs during the data transmission between the wireless sensor nodes and the base station. In the wireless sensor network applications for civil structural health monitoring, the errors caused by data loss inevitably affect the data analysis of the structure and subsequent decision making. This article explores a novel application of compressive sampling to recover the lost data in a wireless sensor network used in structural health monitoring. The main idea in this approach is to first perform a linear projection of the transmitted data x onto y by a random matrix and subsequently to transmit the data y to the base station. The original data x are then reconstructed on the base station from the data y using the compressive sampling method. The acceleration time series collected by the field test on the Jinzhou West Bridge and the Structural Health Monitoring System on the National Aquatics Center in Beijing are employed to validate the accuracy of the proposed data loss recovery approach. The results indicate that good recovery accuracy can be obtained if the original data have a sparse characteristic in some orthonormal basis, whereas the recovery accuracy is degraded when the original data are not sparse in the orthonormal basis.
Article
Full-text available
The application of compressive sensing (CS) to structural health monitoring is an emerging research topic. The basic idea in CS is to use a specially-designed wireless sensor to sample signals that are sparse in some basis (e.g. wavelet basis) directly in a compressed form, and then to reconstruct (decompress) these signals accurately using some inversion algorithm after transmission to a central processing unit. However, most signals in structural health monitoring are only approximately sparse, i.e. only a relatively small number of the signal coefficients in some basis are significant, but the other coefficients are usually not exactly zero. In this case, perfect reconstruction from compressed measurements is not expected. A new Bayesian CS algorithm is proposed in which robust treatment of the uncertain parameters is explored, including integration over the prediction-error precision parameter to remove it as a "nuisance" parameter. The performance of the new CS algorithm is investigated using compressed data from accelerometers installed on a space-frame structure and on a cable-stayed bridge. Compared with other state-of-the-art CS methods including our previously-published Bayesian method which uses MAP (maximum a posteriori) estimation of the prediction-error precision parameter, the new algorithm shows superior performance in reconstruction robustness and posterior uncertainty quantification. Furthermore, our method can be utilized for recovery of lost data during wireless transmission, regardless of the level of sparseness in the signal.
Article
Full-text available
Smart sensors densely distributed over structures can provide rich information for structural monitoring using their onboard wireless communication and computational capabilities. However, issues such as time synchronization error, data loss, and dealing with large amounts of harvested data have limited the implementation of full-fledged systems. Limited network resources (e.g. battery power, storage space, bandwidth, etc.) make these issues quite challenging. This paper first investigates the effects of time synchronization error and data loss, aiming to clarify requirements on synchronization accuracy and communication reliability in SHM applications. Coordinated computing is then examined as a way to manage large amounts of data.
Article
Full-text available
Compressed sensing (CS) is a powerful new data acquisition paradigm that seeks to accurately reconstruct unknown sparse signals from very few (relative to the target signal dimension) random projections. The specific objective of this study is to save wireless sensor energy by using CS to simultaneously reduce data sampling rates, on-board storage requirements, and communication data payloads. For field-deployed low power wireless sensors that are often operated with limited energy sources, reduced communication translates directly into reduced power consumption and improved operational reliability. In this study, acceleration data from a multi-girder steel-concrete deck composite bridge are processed for the extraction of mode shapes. A wireless sensor node previously designed to perform traditional uniform, Nyquist rate sampling is modified to perform asynchronous, effectively sub-Nyquist rate sampling. The sub-Nyquist data are transmitted off-site to a computational server for reconstruction using the CoSaMP matching pursuit recovery algorithm and further processed for extraction of the structure’s mode shapes. The mode shape metric used for reconstruction quality is the modal assurance criterion (MAC), an indicator of the consistency between CS and traditional Nyquist acquired mode shapes. A comprehensive investigation of modal accuracy from a dense set of acceleration response data reveals that MAC values above 0.90 are obtained for the first four modes of a bridge structure when at least 20% of the original signal is sampled using the CS framework. Reduced data collection, storage and communication requirements are found to lead to substantial reductions in the energy requirements of wireless sensor networks at the expense of modal accuracy. Specifically, total energy reductions of 10–60% can be obtained for a sensor network with 10–100 sensor nodes, respectively. 
The reduced energy requirements of the CS sensor nodes are shown to directly result in improved battery life and communication reliability.
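The modal assurance criterion (MAC) used above as the reconstruction-quality metric is straightforward to compute; a minimal sketch:

```python
import numpy as np

def mac(phi1, phi2):
    # Modal Assurance Criterion: 1 for perfectly consistent mode shapes,
    # 0 for orthogonal ones; invariant to scaling of either vector.
    num = abs(np.vdot(phi1, phi2)) ** 2
    return num / (np.vdot(phi1, phi1).real * np.vdot(phi2, phi2).real)
```

Because the MAC is scale-invariant, it compares the shape of a CS-recovered mode to the Nyquist-sampled reference without requiring consistent mode-shape normalization.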
Article
Full-text available
Civil infrastructures are critical to every nation, due to their substantial investment, long service period, and the enormous negative impact of failure. However, they inevitably deteriorate during their service lives. Therefore, methods capable of assessing conditions and identifying damage in a structure timely and accurately have drawn increasing attention. Recently, compressive sensing (CS), a significant breakthrough in signal processing, has been proposed to capture and represent compressible signals at a rate significantly below the traditional Nyquist rate. Due to its sound theoretical background and notable influence, this methodology has been successfully applied in many research areas. To explore its application in structural damage identification, a new CS-based damage identification scheme is proposed in this paper, treating damage identification as a pattern classification problem. The time-domain structural responses are transferred to the frequency domain as a sparse representation, and numerically simulated data under various damage scenarios are used to train a feature matrix as input information. This matrix can be used for damage identification through an optimization process. This is one of the first applications of this advanced technique in structural engineering. To demonstrate its effectiveness, numerical simulation results on a complex pipe-soil interaction model are used to train the parameters and then to identify simulated pipe degradation damage and free-spanning damage. To further demonstrate the method, vibration tests of a steel pipe laid on the ground are carried out, and the measured acceleration time histories are used for damage identification. Both numerical and experimental verification results confirm that the proposed damage identification scheme is a promising tool for structural health monitoring.
Article
Full-text available
In recent years, there has been an increasing interest in the adoption of emerging sensing technologies for instrumentation within a variety of structural systems. Wireless sensors and sensor networks are emerging as sensing paradigms that the structural engineering field has begun to consider as substitutes for traditional tethered monitoring systems. A benefit of wireless structural monitoring systems is that they are inexpensive to install because extensive wiring is no longer required between sensors and the data acquisition system. Researchers are discovering that wireless sensors are an exciting technology that should not be viewed as simply a substitute for traditional tethered monitoring systems. Rather, wireless sensors can play greater roles in the processing of structural response data; this feature can be utilized to screen data for signs of structural damage. Also, wireless sensors have limitations that require novel system architectures and modes of operation. This paper is intended to serve as a summary review of the collective experience the structural engineering community has gained from the use of wireless sensors and sensor networks for monitoring structural performance and health.
Article
Full-text available
Suppose a non-random portion of a data vector is missing. With some minimal prior knowledge about the data vector, can we recover the missing portion from the available one? In this paper, we consider a linear programming approach to this problem, present numerical evidence suggesting the effectiveness and limitation of this approach, and give deterministic conditions that guarantee a successful recovery. Our theoretical results, though related to recent results in compressive sensing, do not rely on randomization.
Article
Full-text available
The primary objective of this paper is to develop output only modal identification and structural damage detection. Identification of multi-degree of freedom (MDOF) linear time invariant (LTI) and linear time variant (LTV—due to damage) systems based on Time-frequency (TF) techniques—such as short-time Fourier transform (STFT), empirical mode decomposition (EMD), and wavelets—is proposed. STFT, EMD, and wavelet methods developed to date are reviewed in detail. In addition a Hilbert transform (HT) approach to determine frequency and damping is also presented. In this paper, STFT, EMD, HT and wavelet techniques are developed for decomposition of free vibration response of MDOF systems into their modal components. Once the modal components are obtained, each one is processed using Hilbert transform to obtain the modal frequency and damping ratios. In addition, the ratio of modal components at different degrees of freedom facilitate determination of mode shape. In cases with output only modal identification using ambient/random response, the random decrement technique is used to obtain free vibration response. The advantage of TF techniques is that they are signal based; hence, can be used for output only modal identification. A three degree of freedom 1:10 scale model test structure is used to validate the proposed output only modal identification techniques based on STFT, EMD, HT, wavelets. Both measured free vibration and forced vibration (white noise) response are considered. The secondary objective of this paper is to show the relative ease with which the TF techniques can be used for modal identification and their potential for real world applications where output only identification is essential. Recorded ambient vibration data processed using techniques such as the random decrement technique can be used to obtain the free vibration response, so that further processing using TF based modal identification can be performed.
Article
Full-text available
Sparsity of representations of signals has been shown to be a key concept of fundamental importance in fields such as blind source separation, compression, sampling, and signal analysis. The aim of this paper is to compare several commonly used sparsity measures based on intuitive attributes. Intuitively, a sparse representation is one in which a small number of coefficients contain a large proportion of the energy. Six properties are discussed (Robin Hood, Scaling, Rising Tide, Cloning, Bill Gates, and Babies), each of which a sparsity measure should have. The main contributions of this paper are the proofs and the associated summary table, which classify commonly used sparsity measures based on whether or not they satisfy these six propositions. Only two of the measures satisfy all six: the pq-mean with p ≤ 1, q > 1, and the Gini index.
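As an illustration, the Gini index singled out above can be computed directly from the sorted absolute coefficients, following Hurley and Rickard's definition:

```python
import numpy as np

def gini_index(c):
    # Gini index of the sorted absolute coefficients: 0 for a perfectly
    # uniform vector, approaching 1 as the energy concentrates in a few
    # coefficients (Hurley & Rickard's sparsity measure).
    a = np.sort(np.abs(np.asarray(c, dtype=float)))
    n = len(a)
    k = np.arange(1, n + 1)
    return 1.0 - 2.0 * np.sum(a / a.sum() * (n - k + 0.5) / n)
```

Unlike ℓ0-style counts, the index is scale-invariant and varies smoothly as energy concentrates, which is why it satisfies all six intuitive properties listed above.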
Conference Paper
Full-text available
Data loss in wireless communications greatly affects the reconstruction quality of wirelessly transmitted images. Conventionally, channel coding is performed at the encoder to enhance recovery of the image by adding known redundancy. While channel coding is effective, it can be very computationally expensive. For this reason, a new mechanism of handling data losses in wireless multimedia sensor networks (WMSN) using compressed sensing (CS) is introduced in this paper. This system uses compressed sensing to detect and compensate for data loss within a wireless network. A combination of oversampling and an adaptive parity (AP) scheme are used to determine which CS samples contain bit errors, remove these samples and transmit additional samples to maintain a target image quality. A study was done to test the combined use of adaptive parity and compressive oversampling to transmit and correctly recover image data in a lossy channel to maintain Quality of Information (QoI) of the resulting images. It is shown that by using the two components, an image can be correctly recovered even in a channel with very high loss rates of 10%. The AP portion of the system was also tested on a software defined radio testbed. It is shown that by transmitting images using a CS compression scheme with AP error detection, images can be successfully transmitted and received even in channels with very high bit error rates.
Conference Paper
Full-text available
Data loss in wireless sensing applications is inevitable and while there have been many attempts at coping with this issue, recent developments in the area of Compressive Sensing (CS) provide a new and attractive perspective. Since many physical signals of interest are known to be sparse or compressible, employing CS, not only compresses the data and reduces effective transmission rate, but also improves the robustness of the system to channel erasures. This is possible because reconstruction algorithms for compressively sampled signals are not hampered by the stochastic nature of wireless link disturbances, which has traditionally plagued attempts at proactively handling the effects of these errors. In this paper, we propose that if CS is employed for source compression, then CS can further be exploited as an application layer erasure coding strategy for recovering missing data. We show that CS erasure encoding (CSEC) with random sampling is efficient for handling missing data in erasure channels, paralleling the performance of BCH codes, with the added benefit of graceful degradation of the reconstruction error even when the amount of missing data far exceeds the designed redundancy. Further, since CSEC is equivalent to nominal oversampling in the incoherent measurement basis, it is computationally cheaper than conventional erasure coding. We support our proposal through extensive performance studies.
Article
Full-text available
In remote monitoring of Electrocardiogram (ECG), it is very important to ensure that the diagnostic integrity of signals is not compromised by sensing artifacts and channel errors. It is also important for the sensors to be extremely power efficient to enable wearable form factors and long battery life. We present an application of Compressive Sensing (CS) as an error mitigation scheme at the application layer for wearable, wireless sensors in diagnostic grade remote monitoring of ECG. In our previous work, we described an approach to mitigate errors due to packet losses by projecting ECG data to a random space and recovering a faithful representation using sparse reconstruction methods. Our contributions in this work are twofold. First, we present an efficient hardware implementation of random projection at the sensor. Second, we validate the diagnostic integrity of the reconstructed ECG after packet loss mitigation. We validate our approach on MIT and AHA databases comprising more than 250,000 normal and abnormal beats using EC57 protocols adopted by the Food and Drug Administration (FDA). We show that sensitivity and positive predictivity of a state-of-the-art ECG arrhythmia classifier is essentially invariant under CS based packet loss mitigation for both normal and abnormal beats even at high packet loss rates. In contrast, the performance degrades significantly in the absence of any error mitigation scheme, particularly for abnormal beats such as Ventricular Ectopic Beats (VEB).
Conference Paper
Full-text available
Compressive sensing is a new data acquisition technique that aims to measure sparse and compressible signals at close to their intrinsic information rate rather than their Nyquist rate. Recent results in compressive sensing show that a sparse or compressible signal can be reconstructed from very few incoherent measurements. Although the sampling and reconstruction process is robust to measurement noise, all current reconstruction methods assume some knowledge of the noise power or the acquired signal to noise ratio. This knowledge is necessary to set algorithmic parameters and stopping conditions. If these parameters are set incorrectly, then the reconstruction algorithms either do not fully reconstruct the acquired signal (underfitting) or try to explain a significant portion of the noise by distorting the reconstructed signal (overfitting). This paper explores this behavior and examines the use of cross validation to determine the stopping conditions for the optimization algorithms. We demonstrate that by designating a small set of measurements as a validation set it is possible to optimize these algorithms and reduce the reconstruction error. Furthermore we explore the trade-off between using the additional measurements for cross validation instead of reconstruction.
Article
Recently, compressive sensing based data loss recovery techniques have become popular for Structural Health Monitoring (SHM) applications. These techniques involve an encoding process that is onerous for the sensor node because of the random sensing matrices used in compressive sensing. In this paper, we present a model in which the sampled raw acceleration data are transmitted directly to the base station/receiver without any encoding at the transmitter. The incomplete acceleration data received after data losses can be reconstructed faithfully using compressive sensing based reconstruction techniques. An in-depth simulated analysis is presented of how random and continuous losses affect the reconstruction of acceleration signals (obtained from a real bridge). Along with a performance analysis for different simulated data losses (from 10 to 50%), the advantages of performing interleaving before transmission are also presented.
Article
The reliable extraction of structural characteristics, such as modal information, from operating structural systems allows for the formation of indicators tied to structural performance and condition. Within this context, reliable monitoring systems and associated processing algorithms need to be developed for a robust yet cost-effective extraction. Wireless Sensor Networks (WSNs) have in recent years surfaced as a promising technology to this end. Currently operating WSNs are, however, bounded by a number of restrictions relating to energy self-sustainability and data transmission energy costs, especially when applied in the context of vibration monitoring. The work presented herein remedies heavy transmission costs by optimally combining the spectro-temporal information already present in the signal with the recently surfaced compressive sensing paradigm, resulting in a robust signal reconstruction technique that allows reliable identification of modal shapes. To this end, this work outlines a step-by-step process for response time-series recovery from partially transmitted spectro-temporal information. The framework is validated on synthetic data generated for a benchmark structure of the American Society of Civil Engineers. On the basis of this example, this work further provides a cost analysis in comparison to fully transmitting wireless and tethered sensing solutions.
Article
Traditional compressive sensing (CS) methods assume the data sparsity to be constant over time, which holds well in many long-term scenarios. However, the authors' recent study on electrocardiogram (ECG) monitoring reveals that data sparsity varies dramatically for real-time monitoring systems where the data latency must be bounded, due to limited data collected within the delay bound. The variation of data sparsity makes the reconstruction error (RE) unstable. Furthermore, the variation of wireless channel quality also impacts the reconstruction quality. To accommodate both variations, this study proposes a novel adaptive feedback architecture for real-time wireless ECG monitoring based on the CS technique, which can bound the REs in the presence of the variations of data sparsity and wireless channel. An experiment testbed has been built to evaluate the performance of the proposed system. The results show that the data latency can be limited to <300 ms and the RE can be controlled to 9%.
Article
The measurement of far-field radiation patterns is time consuming and expensive. Therefore, a novel technique that reduces the samples required to measure radiation patterns is proposed, in which random far-field samples are measured to reconstruct two-dimensional (2D) or three-dimensional (3D) far-field radiation patterns. The proposed technique uses a compressive sensing algorithm to reconstruct radiation patterns, with the discrete Fourier transform (DFT) or the discrete cosine transform (DCT) used as the sparsity transform. The algorithm was evaluated using three antennas modeled with the High Frequency Structure Simulator (HFSS): a half-wave dipole, a Vivaldi antenna, and a pyramidal horn. The root mean square error (RMSE) and the number of measurements required to reconstruct the antenna pattern were used to evaluate the performance of the algorithm. An empirical test case validates the use of compressive sensing in 2D and 3D radiation pattern reconstruction. Numerical simulations and empirical tests verify that the compressive sensing algorithm can reconstruct radiation patterns, reducing the time and number of measurements required for good antenna pattern measurements.
Article
Randomly missing data in structural vibration response time histories often occurs in structural dynamics and health monitoring. For example, structural vibration responses are often corrupted by outliers or erroneous measurements due to sensor malfunction; in wireless sensing platforms, data loss during wireless communication is a common issue. In addition, to alleviate the wireless data sampling or communication burden, certain amounts of data are often discarded during sampling or before transmission. In these and other applications, recovery of the randomly missing structural vibration responses from the available, incomplete data is essential for system identification and structural health monitoring; it is, however, an ill-posed inverse problem. This paper explicitly harnesses the structure of the data itself, that of the structural vibration responses, to address this inverse problem. What is relevant is an empirical, but often practically true, observation: typically only a few modes are active in the structural vibration responses; hence a sparse representation (in the frequency domain) of the single-channel data vector, or a low-rank structure (by singular value decomposition) of the multi-channel data matrix. Exploiting such prior knowledge of the data structure (intra-channel sparse or inter-channel low-rank), the new theories of ℓ1-minimization sparse recovery and nuclear-norm-minimization low-rank matrix completion enable recovery of the randomly missing or corrupted structural vibration response data. The performance of these two alternatives, in terms of recovery accuracy and computational time under different data missing rates, is investigated on several structural vibration response data sets: the seismic responses of the super high-rise Canton Tower and the structural health monitoring accelerations of a real large-scale cable-stayed bridge. Encouraging results are obtained, and the applicability and limitations of the presented methods are discussed.
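The low-rank route for multi-channel records can be sketched with a hard-thresholded SVD iteration, a simple stand-in for the nuclear-norm matrix completion the abstract describes. The two-mode synthetic data, channel count, and 30% random loss are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ch, n_t, rank = 8, 200, 2               # channels, time samples, active modes

# Multi-channel responses driven by two modes -> a rank-2 data matrix
t = np.linspace(0.0, 2.0, n_t)
modes = np.vstack([np.sin(2 * np.pi * 3 * t), np.sin(2 * np.pi * 7 * t)])
X = rng.standard_normal((n_ch, rank)) @ modes

observed = rng.random(X.shape) > 0.3      # ~30% of entries missing at random
Z = np.where(observed, X, 0.0)            # start from the incomplete matrix

for _ in range(200):
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    s[rank:] = 0.0                        # keep only the top-`rank` singular values
    Z = (U * s) @ Vt                      # project onto the low-rank set
    Z[observed] = X[observed]             # re-impose the observed entries

rel_err = np.linalg.norm(Z - X) / np.linalg.norm(X)
```

In practice the rank is not known in advance, which is one reason the paper's nuclear-norm formulation (which does not need an explicit rank) is preferable to this fixed-rank projection.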
Article
Buffer sizing is an important network configuration parameter that impacts the quality of service characteristics of data traffic. With falling memory costs and the fallacy that "more is better," network devices are being overprovisioned with large buffers. This may increase queueing delays experienced by a packet and subsequently impact stability of core protocols such as TCP. The problem has been studied extensively for wired networks. However, there is little work addressing the unique challenges of wireless environments such as time-varying channel capacity, variable packet inter-service time, and packet aggregation, among others. In this article we discuss these challenges, classify the current state-of-the-art solutions, discuss their limitations, and provide directions for future research in the area.
Article
Lossy transmission is a common problem for monitoring systems based on wireless sensors. Reliable communication protocols, which enhance communication reliability by repeatedly retransmitting unreceived packets, are one approach to tackling the problem of data loss. An alternative approach tolerates data loss to some extent and seeks to recover the lost data algorithmically. Compressive sensing (CS) provides such a data loss recovery technique. This technique can be embedded into smart wireless sensors and effectively increases wireless communication reliability without retransmitting the data; the promise of this approach is reduced communication and thus power savings. The basic idea of the CS-based approach is that, instead of transmitting the raw signal acquired by the sensor, a transformed signal, generated by projecting the raw signal onto a random matrix, is transmitted. Some data loss may occur during the transmission of this transformed signal. However, according to the theory of CS, the raw signal can be effectively reconstructed from the received incomplete transformed signal, provided that the raw signal is compressible in some basis and the data loss ratio is low. Specifically, this paper aims to provide accurate compensation for stationary and compressible acceleration signals obtained from structural health monitoring (SHM) systems with data loss ratios below 20%. This CS-based technique is implemented on the Imote2 smart sensor platform using the foundation of the Illinois Structural Health Monitoring Project Service Tool-suite. To overcome the constraints of the limited onboard resources of wireless sensor nodes, a method called the random demodulator (RD) is employed to provide a memory- and power-efficient construction of the random sampling matrix. The RD sampling matrix is adapted to accommodate data loss in wireless transmission and to meet the objectives of the data recovery. The embedded program is tested in a series of sensing and communication experiments. Examples and a parametric study are presented to demonstrate the applicability of the embedded program as well as the efficacy of CS-based data loss recovery for real wireless SHM systems.
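The encode/lose/recover loop described above can be mimicked end to end in a few lines. The sketch below projects a DCT-sparse synthetic record onto a dense Gaussian random matrix (rather than the paper's random-demodulator construction), drops about 20% of the transmitted measurements, and reconstructs with plain iterative soft thresholding (ISTA) as a generic ℓ1 solver. Every size, loss rate, and matrix choice here is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k_sp = 256, 6

# Orthonormal DCT-II sparsity basis for the synthetic "acceleration" record
j = np.arange(n)
Psi = np.cos(np.pi * (j[:, None] + 0.5) * j[None, :] / n)
Psi /= np.linalg.norm(Psi, axis=0)

coef = np.zeros(n)
coef[rng.choice(n, k_sp, replace=False)] = rng.standard_normal(k_sp)
x = Psi @ coef                                  # compressible raw signal

Phi = rng.standard_normal((n, n)) / np.sqrt(n)  # random projection (sender side)
y = Phi @ x                                     # transmitted encoded measurements

keep = rng.random(n) > 0.2                      # ~20% of measurements lost in transit
A = (Phi @ Psi)[keep, :]                        # effective sensing matrix at receiver
y_rx = y[keep]

def ista(A, y, lam=0.01, iters=1000):
    """Iterative soft thresholding for min 0.5*||y - Az||^2 + lam*||z||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the gradient
    z = np.zeros(A.shape[1])
    for _ in range(iters):
        z = z + step * (A.T @ (y - A @ z))      # gradient step on the data term
        z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return z

x_hat = Psi @ ista(A, y_rx)
rel_err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
```

The key point the abstract makes is visible in the code: loss of measurement rows only shrinks the sensing matrix, so no retransmission is needed as long as enough rows survive for the sparse recovery to succeed.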
Article
Wireless sensor technology-based structural health monitoring (SHM) has been widely investigated recently. This paper proposes a fast-moving wireless sensing technique for the SHM of bridges along a highway or in a city, in which wireless sensor nodes are installed on the bridges to automatically acquire data and a fast-moving vehicle with an onboard wireless base station periodically collects the data without interrupting traffic. For this fast-moving wireless sensing technique, reliable wireless data transmission between the sensor nodes and the fast-moving base station is one of the key issues. In fast-moving conditions, the data packet loss rate during wireless data transmission between the moving base station and the sensor nodes increases remarkably. In this paper, data packet loss in the fast-moving state is first investigated through a series of experiments. To solve the data packet loss problem, a compressive sensing (CS)-based lost data recovery approach is proposed. A field test on a cable-stayed bridge is performed to further illustrate data packet loss in the fast-moving wireless sensing technique and the ability of the CS-based approach to recover lost data. The experimental and field test results indicate that the Doppler effect is the main cause of data packet loss for the fast-moving wireless sensing technique, and the feasibility and efficiency of the CS-based lost data recovery approach are validated. Copyright © 2014 John Wiley & Sons, Ltd.
Article
A structural health monitoring (SHM) system provides an efficient way to diagnose the condition of critical and large-scale structures such as long-span bridges. With the development of SHM techniques, numerous condition assessment and damage diagnosis methods have been developed to monitor the evolution of deterioration and long-term structural performance of such structures, as well as to conduct rapid damage and post-disaster assessments. However, the condition assessment and the damage detection methods described in the literature are usually validated by numerical simulation and/or laboratory testing of small-scale structures with assumed deterioration models and artificial damage, which makes the comparison of different methods invalid and unconvincing to a certain extent. This paper presents a full-scale bridge benchmark problem organized by the Center of Structural Monitoring and Control at the Harbin Institute of Technology. The benchmark bridge structure, the SHM system, the finite element model of the bridge, and the monitored data are presented in detail. Focusing on two critical and vulnerable components of cable-stayed bridges, two benchmark problems are proposed on the basis of the field monitoring data from the full-scale bridge, that is, condition assessment of stay cables (Benchmark Problem 1) and damage detection of bridge girders (Benchmark Problem 2). For Benchmark Problem 1, the monitored cable stresses and the fatigue properties of the deteriorated steel wires and cables are presented. The fatigue life prediction model and the residual fatigue life assessment of the cables are the foci of this problem. For Benchmark Problem 2, several damage patterns were observed for the cable-stayed bridge. The acceleration time histories, together with the environmental conditions during the damage development process of the bridge, are provided. Researchers are encouraged to detect and to localize the damage and the damage development process. 
All the datasets and detailed descriptions, including the cable stresses, the acceleration datasets, and the finite element model, are available on the Structural Monitoring and Control website (http://smc.hit.edu.cn). Copyright © 2013 John Wiley & Sons, Ltd.
Article
The declining state of civil infrastructure has motivated researchers to seek effective methods for real-time structural health monitoring (SHM). Decentralized computing and data aggregation employing smart sensors allow the deployment of a dense array of sensors throughout a structure. The Imote2, developed by Intel, provides enhanced computation and communication resources that allow demanding sensor network applications, such as SHM of civil infrastructure, to be supported. This study explores the development of a versatile Imote2 sensor board with onboard signal processing specifically designed for the demands of SHM applications. The components of the accelerometer board have been carefully selected to allow for the low-noise and high resolution data acquisition that is necessary to successfully implement SHM algorithms.
Article
In structural health monitoring (SHM) of civil structures, data compression is often needed to reduce the cost of data transfer and storage, because of the large volumes of sensor data generated from the monitoring system. The traditional framework for data compression is to first sample the full signal and, then to compress it. Recently, a new data compression method named compressive sampling (CS) that can acquire the data directly in compressed form by using special sensors has been presented. In this article, the potential of CS for data compression of vibration data is investigated using simulation of the CS sensor algorithm. For reconstruction of the signal, both wavelet and Fourier orthogonal bases are examined. The acceleration data collected from the SHM system of Shandong Binzhou Yellow River Highway Bridge is used to analyze the data compression ability of CS. For comparison, both the wavelet-based and Huffman coding methods are employed to compress the data. The results show that the values of compression ratios achieved using CS are not high, because the vibration data used in SHM of civil structures are not naturally sparse in the chosen bases.
Conference Paper
Providing effective real-time speech transmission with acceptable quality for VoIP applications and for mobile and wireless speech communication is a very important and challenging task. In these communication networks, bad conditions with heavy packet loss occur from time to time. In this paper, a packet loss concealment scheme based on compressed sensing technology is proposed. It uses only the available speech samples to recover the lost ones by means of compressed sensing, so it is independent of the speech codec algorithm and can therefore be used on the receiver side of any speech communication system. Evaluation results show that its performance remains good even when the packet loss rate is 40% or higher, and that it works well for both waveform-based and parameter-based speech coding applications.
Article
Recent theoretical developments in the area of compressive sensing (CS) have the potential to significantly extend the capabilities of digital data acquisition systems such as analog-to-digital converters and digital imagers in certain applications. To date, most of the CS literature has been devoted to studying the recovery of sparse signals from a small number of linear measurements. In this paper, we study more practical CS systems where the measurements are quantized to a finite number of bits; in such systems some of the measurements typically saturate, causing significant nonlinearity and potentially unbounded errors. We develop two general approaches to sparse signal recovery in the face of saturation error. The first approach merely rejects saturated measurements; the second approach factors them into a conventional CS recovery algorithm via convex consistency constraints. To prove that both approaches are capable of stable signal recovery, we exploit the heretofore relatively unexplored property that many CS measurement systems are democratic, in that each measurement carries roughly the same amount of information about the signal being acquired. A series of computational experiments indicate that the signal acquisition error is minimized when a significant fraction of the CS measurements is allowed to saturate (10–30% in our experiments). This challenges the conventional wisdom of both conventional sampling and CS.
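The "rejection" strategy from the abstract can be demonstrated in a few lines: clip Gaussian CS measurements at a hypothetical saturation level, discard the saturated rows, and recover from the rest, here with Orthogonal Matching Pursuit as a generic sparse recoverer. The sizes and the 80th-percentile clip level are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, k_sp = 128, 90, 4

Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # "democratic" random measurements
x = np.zeros(n)
x[rng.choice(n, k_sp, replace=False)] = rng.standard_normal(k_sp)
y = Phi @ x

sat = np.quantile(np.abs(y), 0.8)               # hypothetical ADC saturation level
y_q = np.clip(y, -sat, sat)                     # ~20% of measurements saturate
ok = np.abs(y_q) < sat                          # rejection: keep unsaturated rows only

def omp(A, y, sparsity):
    """Orthogonal Matching Pursuit: greedily build the sparse support."""
    residual, support = y.copy(), []
    coef_s = np.zeros(0)
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef_s
    z = np.zeros(A.shape[1])
    z[support] = coef_s
    return z

x_hat = omp(Phi[ok], y_q[ok], k_sp)             # recover from surviving rows only
rel_err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
```

Because each random measurement carries roughly the same information (the democracy property the paper formalizes), dropping the clipped rows still leaves enough measurements to recover the sparse signal; the paper's second approach, consistency constraints on the saturated rows, is not sketched here.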
Article
This tutorial presents a detailed study of sensor faults that occur in deployed sensor networks and a systematic approach to modeling these faults. We begin by reviewing the fault detection literature for sensor networks. We draw from current literature, our own experience, and data collected from scientific deployments to develop a set of commonly used features useful in detecting and diagnosing sensor faults. We use this feature set to systematically define commonly observed faults, and provide examples of each of these faults from sensor data collected at recent deployments.
Article
Conventional wisdom and common practice in acquisition and reconstruction of images from frequency data follow the basic principle of the Nyquist density sampling theory. This principle states that to reconstruct an image, the number of Fourier samples we need to acquire must match the desired resolution of the image, i.e. the number of pixels in the image. This paper surveys an emerging theory which goes by the name of "compressive sampling" or "compressed sensing," and which says that this conventional wisdom is inaccurate. Perhaps surprisingly, it is possible to reconstruct images or signals of scientific interest accurately and sometimes even exactly from a number of samples which is far smaller than the desired resolution of the image/signal, e.g. the number of pixels in the image. It is believed that compressive sampling has far reaching implications. For example, it suggests the possibility of new data acquisition protocols that translate analog information into digital form with fewer sensors than what was considered necessary. This new sampling theory may come to underlie procedures for sampling and compressing data simultaneously. In this short survey, we provide some of the key mathematical insights underlying this new theory, and explain some of the interactions between compressive sampling and other fields such as statistics, information theory, coding theory, and theoretical computer science.
Article
In this work, we propose an effective application-layer solution for packet loss mitigation in the context of Body Sensor Networks (BSN) and healthcare telemetry. Packet losses occur for many reasons, including excessive path loss, interference from other wireless systems, handoffs, congestion, and system loading. A call for action is in order, as packet losses can have an extremely adverse impact on many healthcare applications relying on BAN and WAN technologies. Our approach to packet loss mitigation is based on Compressed Sensing (CS), an emerging signal processing concept wherein significantly fewer sensor measurements than suggested by the Shannon/Nyquist sampling theorem can be used to recover signals with arbitrarily fine resolution. We present simulation results demonstrating graceful degradation of performance with increasing packet loss rate. We also compare the proposed approach with retransmissions. The CS-based packet loss mitigation approach was found to maintain up to 99% beat-detection accuracy at packet loss rates of 20%, with a constant latency of less than 2.5 seconds.
Conference Paper
Since high error rates are inevitable in the wireless environment, energy-efficient error control is an important issue for mobile computing systems. We have studied the energy efficiency of two different error correction mechanisms and have measured the efficiency of a software implementation. We show that it is not sufficient to concentrate on the energy efficiency of the error control mechanisms alone; the extra energy consumed by the wireless interface should be incorporated as well. A model is presented that can be used to determine an energy-efficient error correction scheme for a minimal system consisting of a general-purpose processor and a wireless interface. As an example, we have determined these error correction parameters on two systems with a WaveLAN interface.
Article
Conventional approaches to sampling signals or images follow Shannon's theorem: the sampling rate must be at least twice the maximum frequency present in the signal (Nyquist rate). In the field of data conversion, standard analog-to-digital converter (ADC) technology implements the usual quantized Shannon representation - the signal is uniformly sampled at or above the Nyquist rate. This article surveys the theory of compressive sampling, also known as compressed sensing or CS, a novel sensing/sampling paradigm that goes against the common wisdom in data acquisition. CS theory asserts that one can recover certain signals and images from far fewer samples or measurements than traditional methods use.
Rish, I., & Grabarnik, G. (2014). Sparse signal recovery with exponential-family noise. Compressed Sensing & Sparse Filtering, Springer, pp. 77-93.
Huang, Y., Beck, J. L., & Wu, S. Robust Bayesian compressive sensing with data loss recovery for structural health monitoring signals.
Van den Berg, E., & Friedlander, M. P. SPGL1: A solver for large-scale sparse reconstruction.
Bao, Y., & Beck, J. L. Compressive sampling for accelerometer signals in structural health monitoring.
Baheti, P. K., & Garudadri, H. Packet loss mitigation in transmission of biomedical signals for healthcare and fitness applications, Pub.
Candès, E. An introduction to compressive sampling.